Status Update
Comments
an...@google.com <an...@google.com> #2
Thank you for writing in. You have filed this as a bug, but from what I can see this is more of a Feature Request, would you like me to change the category of the issue to a Feature Request?
Could you add in a few examples of the requests you are making and the types of responses you are dealing with?
Can you describe a real-life example in which you use these requests and the quantified drawbacks of the current system - e.g. you mention that it is prohibitively expensive; what exactly do you mean? Does it take too long? How long does it take?
Can you also clarify if your sheet goes over the 1 million cell limit of Google Sheets? You mention it has 200K+ rows, so if you have more than 5 columns, you are at the limit of what Sheets is intended for and you may want to consider moving to a database.
Finally, can you elaborate, in your own words, on the business impact that this new feature (if it is a feature request) would have on your business?
Thank you.
al...@google.com <al...@google.com> #3
Hi,
You asked a lot of questions; I have tried to answer them one by one. Let me know if I missed something.
You have filed this as a bug, but from what I can see this is more of a Feature Request, would you like me to change the category of the issue to a Feature Request?
I am currently migrating from V3. If I am not able to keep doing what I am doing in some way in V4, I consider this a bug; I think it would be a stretch to call this a feature.
Could you add in a few examples of the requests you are making and the types of responses you are dealing with?
Here are some basic, minimal examples. The theme is the same: I want to append a new row to the end of the spreadsheet. This is simple with V3 and impossible to do correctly with V4.
Example 1: Values.Append
Add a new row to the sheet. Given row 1 containing "<empty>, something" and using Values.Append with inputRange="A1", the new row will be inserted at A1 instead of A2.
To fix this, let's change inputRange to 1:1. Given row 1 containing "<empty>, something, <empty>, something", the new row will be inserted at D2 instead of A2.
Or I could use range A:Z, but that has the same issue: any empty rows or columns make the insert misaligned, so it does not start from column A.
I couldn't find any solution in V4 that would work in all cases that might occur in our spreadsheets. We haven't had such problems with V3.
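To make the behavior concrete, here is a minimal sketch of the call (shown with the Python client purely for illustration; we actually use the Java SDK, and creds/SPREADSHEET_ID are placeholders):

from googleapiclient.discovery import build

# Placeholders: `creds` is an OAuth2 credentials object and
# SPREADSHEET_ID identifies the target spreadsheet.
service = build("sheets", "v4", credentials=creds)

result = service.spreadsheets().values().append(
    spreadsheetId=SPREADSHEET_ID,
    range="Sheet1!A1",              # the "table hint" range discussed above
    valueInputOption="RAW",
    insertDataOption="INSERT_ROWS",
    includeValuesInResponse=True,
    body={"values": [["id-123", "some value"]]},
).execute()

# Where the row lands depends on how the API detected a "table" around
# the hint range -- with empty leading cells it may not be row N+1, column A.
print(result["updates"]["updatedRange"])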
Example 2: AppendCellsRange
Let's assume the range issue above is fixed. I want to add a new row and return its position and the inserted data. AppendCellsRange will add the row to the end of the sheet, but does not return the data.
I would need to make a Values.Get request to read the full sheet, but I have a sheet with 160K rows and 16 columns. Instead I can do multiple Values.Get calls, say 16 batches of 10K rows, and search for the data, but this takes 16x the round-trip time, and it hits the quota limit very fast and locks out my user for a while.
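For illustration, the batched-read search we fall back to looks roughly like this (Python client again, reusing the service object from the sketch above; batch size and identifier are example values):

BATCH = 10_000          # rows per Values.Get call; 160K rows -> 16 calls
TARGET = "id-123"       # identifier we appended and now need to locate

found_row = None
start = 1
while found_row is None:
    rng = f"Sheet1!A{start}:P{start + BATCH - 1}"   # 16 columns = A..P
    rows = service.spreadsheets().values().get(
        spreadsheetId=SPREADSHEET_ID, range=rng,
    ).execute().get("values", [])
    if not rows:
        break                        # ran past the end of the data
    for offset, row in enumerate(rows):
        if row and row[0] == TARGET:
            found_row = start + offset
            break
    start += BATCH
# Every iteration is a full round trip and counts against the read quota.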
Can you describe a real-life example in which you use these requests and the quantified drawbacks of the current system - i.e. You mention that it is prohibitively expensive, how exactly do you mean? It takes too long? How long does it take?
The real-life example is the Google Sheets integration with our own system. This is a feature we provide to some of our customers who need it. Our customers manage records of data in their spreadsheet, one record per row. While they use our system, they add and update rows in their sheet in real time.
The workflow looks like this:
- User navigates to a website and uses our service to store data from this website as a new row at the end of their spreadsheet; the append problem I described above applies here.
- User navigates to another site, and does the same.
- User comes back to the first site and expects to see their data loaded from their spreadsheet in our service. So we need to search the sheet for the identifier and read the row. With V4 this is almost impossible, so we had to add complicated logic to index the spreadsheet and try to refresh this index as needed to speed up searches. Otherwise, or during indexing, we read the full sheet in batches of 10K rows, but without the index (cache) this dies on quota limits pretty quickly, even after they were bumped by quite a lot, and it takes a lot of time.
- User edits the loaded data and saves it back to the sheet. With V4 this has also become worse: we have to calculate checksums and do a Get just before the Update to check for concurrent changes (a sketch of this check follows below), while with V3 concurrent changes were handled automatically using ETags.
So in a nutshell this is our business logic for these customers.
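A rough sketch of that checksum-based check (illustrative only, not our production code; V4 offers no ETag equivalent, so the window between the Get and the Update stays open):

import hashlib

def checksum(rows):
    """Stable fingerprint of a row range, used in place of V3's ETags."""
    return hashlib.sha256(repr(rows).encode()).hexdigest()

def update_row(service, spreadsheet_id, rng, expected_checksum, new_values):
    # Re-read the row just before writing and compare fingerprints.
    current = service.spreadsheets().values().get(
        spreadsheetId=spreadsheet_id, range=rng,
    ).execute().get("values", [])
    if checksum(current) != expected_checksum:
        raise RuntimeError("concurrent modification detected")
    # Still racy: another writer can slip in between the Get and the Update.
    service.spreadsheets().values().update(
        spreadsheetId=spreadsheet_id, range=rng,
        valueInputOption="RAW", body={"values": new_values},
    ).execute()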
You mention that it is prohibitively expensive, how exactly do you mean? It takes too long? How long does it take?
I think I explained this above: reading the full sheet takes too much time, and the Google APIs also time out quite a lot, so we try to make more reliable calls by splitting the sheet into predictable, fixed-size batches, but this increases the number of API calls, which again costs quota and latency.
It takes too long? How long does it take?
I hope you are aware of your own API's performance, but in case not, here are some numbers.
This is only for retrieving 4 columns using BatchGetValuesByDataFilterRequest.
Batch Size | Memory | Latency | Total latency (100k rows) |
---|---|---|---|
1k | 4-5 MB | 0.4s | 40s |
5k | 6 MB | 0.6s | 12s |
10k | 11 MB | 0.7s | 7s |
20k | 20 MB | 1s | 5s |
30k | 28 MB | 1.2s | 4s |
50k | 44 MB | 1.3s | 2.6s |
100k | 84 MB | 2s | 2s |
We handle a lot of users, so latency is only part of the issue; the other is keeping memory usage reasonable. We cannot read the full spreadsheet of every user into memory every time. Consider sheets with 16 columns and 100K+ rows.
Here is an alternative using Spreadsheets.Get:
Batch Size | Memory | Latency | Total latency (100k rows) |
---|---|---|---|
1k | 40 MB | 1.5s | 150s |
10k | 400 MB | 8s | 80s |
This is unusable; the Google Java SDK creates loads of objects and it is very slow here too.
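For comparison, this read path looks roughly like the following (Python client for illustration, reusing the service object from the earlier sketch; the grid data is what drives the memory blow-up):

# Spreadsheets.Get with includeGridData returns full CellData objects
# (formatting, formulas, effective values) for every cell, which is why
# memory grows so much faster than with Values.Get.
resp = service.spreadsheets().get(
    spreadsheetId=SPREADSHEET_ID,
    ranges=["Sheet1!A1:P10000"],    # one 10K-row batch, 16 columns
    includeGridData=True,
).execute()
rows = resp["sheets"][0]["data"][0].get("rowData", [])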
Can you also clarify if your sheet goes over the 1 million cell limit of Google Sheets?
Are you sure about the 1 million cell limit? We have spreadsheets with over 1M cells that work without problems (especially with V3).
Please see here
Up to 5 million cells or 18,278 columns (column ZZZ) for spreadsheets that are created in or converted to Google Sheets.
Your service seems to have a limit of 5 million cells, so my example sheet of 200K rows and 16 columns is well below the limit.
You mention it has 200K+ rows, so if you have more than 5 columns, you are at the limit of what Sheets is intended for and you may want to consider moving to a database.
Again, your comment about the limit is false. The limit is 5 million cells and we are well aware of it. Most of our users are well below this limit, but a 200K-row, 16-column sheet, which is well within the limits of Google Sheets, is still too large to be read in full on every operation.
I cannot ask my users, who are individuals without a technical background, to replace their spreadsheets with a database. They want to keep records in their spreadsheets; Google Sheets V3 works perfectly fine, while V4 breaks a lot of this integration.
Finally, can you elaborate in your words the business impact that this new feature (if it is a feature request), would have on your business.
From my POV this is a bug in V4, as it breaks functionality that used to work in V3, and as I mentioned above, considering appending a single row to the end of the spreadsheet a new feature sounds like a stretch to me.
The business impact of not having this and losing V3 is that we might need to abandon our Google Sheets integration and risk losing around 2K customers.
ca...@google.com <ca...@google.com> #4
Thank you very much for the extensive details on the context around this request, it is very helpful.
Apologies about the 1 million cells comment, I am not sure where I got that from but you are absolutely right about the 5 million cell limit.
That said, from what I can make out, there are three main things your issue is about.
1. spreadsheets.values.append does not return the row in which it added the values just appended, and it does not include the values.
2. spreadsheets.values.append sometimes does not detect or input values in the right place.
3. spreadsheets.values.append does not support more than 4 concurrent requests.
For 2, this has already been raised here
For 3, as you have already found, there is another issue that is being dealt with separately.
Be sure to "star" these issues so that we know more people are affected; this will also subscribe you to updates.
For number 1, could you please review this example:
- Create a new sheet.
- In range B6:E9 manually insert [[1,2,3,4],[1,2,3,4],[1,2,3,4],[1,2,3,4]]
- Go to https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets.values/append and use the "Try this API", and input:
- the Spreadsheet ID
- the range string Sheet1!B6
- includeValuesInResponse as true
- valueInputOption as RAW
- and the request body as below:
{
"values": [
[
1,
2,
3
]
]
}
This for me returns a response:
{
"spreadsheetId": "[SS_ID]",
"tableRange": "Sheet1!B6:C9",
"updates": {
"spreadsheetId": "[SS_ID]",
"updatedRange": "Sheet1!B10:D10",
"updatedRows": 1,
"updatedColumns": 3,
"updatedCells": 3,
"updatedData": {
"range": "Sheet1!B10:D10",
"majorDimension": "ROWS",
"values": [
[
"1",
"2",
"3"
]
]
}
}
}
As you can see, updatedRange contains the range of the new cells, and this information is repeated in updatedData, which also contains the values that were just appended.
Or maybe I'm missing something?
an...@google.com <an...@google.com> #5
Hi,
Thank you very much for the quick response. You are close, but some of the points are not related to the issue I was describing.
For number 1: spreadsheets.values.append does return the inserted values and their position; I already use that and it works perfectly well. But I was also testing spreadsheets.batchUpdate with AppendCellsRequest, which works well for inserting rows at the expected last row, but does not return the inserted data.
Your point in number 2 is not exactly what I was reporting either; I was not even aware of those reported edge cases. My issue is that spreadsheets.values.append (talking about the expected behavior, not bugs) does not provide a way to specifically insert a row starting from the first column after the last data row, regardless of what rows and columns are missing above it.
To summarize, Google Sheets V4 provides two ways to append data:
- spreadsheets.values.append returns the inserted values, but it is not possible to use it to insert data after the last data row starting from the first column without a suitable arrangement of the preceding rows.
- spreadsheets.batchUpdate's AppendCellsRequest does insert rows after the last data row starting from the first column, as we need, but it does not return the inserted values and range.
In the end this means that none of the options provided by V4 is a correct alternative to V3's insert operation for our use case.
Of the two requests above I prefer spreadsheets.values.append, which is faster and returns the inserted data and range. If we could solve the problem of inserting rows starting from the first column after the last data row, that would be awesome. Alternatively, if AppendCellsRequest returned the inserted data or range, that would work for us too.
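Side by side, the two V4 paths look roughly like this (sketched with the Python client for illustration, reusing the service object from earlier; SPREADSHEET_ID and the sheetId of 0 are placeholders):

# Path 1: values.append -- returns the inserted range and values, but the
# insert position depends on table detection around the given range.
resp = service.spreadsheets().values().append(
    spreadsheetId=SPREADSHEET_ID, range="Sheet1!A:P",
    valueInputOption="RAW", includeValuesInResponse=True,
    body={"values": [["id-123", "some value"]]},
).execute()
print(resp["updates"]["updatedRange"])      # position IS returned here

# Path 2: batchUpdate + AppendCellsRequest -- always appends after the
# last data row starting at column A, but returns no range or values.
service.spreadsheets().batchUpdate(
    spreadsheetId=SPREADSHEET_ID,
    body={"requests": [{"appendCells": {
        "sheetId": 0,                        # placeholder sheet ID
        "fields": "userEnteredValue",
        "rows": [{"values": [
            {"userEnteredValue": {"stringValue": "id-123"}},
            {"userEnteredValue": {"stringValue": "some value"}},
        ]}],
    }}]},
).execute()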
To summarize my suggestions:
- A. Add an option to spreadsheets.values.append to allow inserting data from the first column after the last data row, as AppendCellsRequest does, regardless of table arrangement.
- B. Or add a new request, e.g. spreadsheets.values.appendCells, that behaves like AppendCellsRequest but returns the inserted data and range just like spreadsheets.values.append.
- C. Or make AppendCellsRequest return an AppendCellsResponse containing the inserted data and range on request.
Basically, you have two types of append requests that behave differently; what we need is an append request that combines the two and behaves similarly to V3's insert operation.
This is not specifically a bug in the implementation. What I am reporting is a bug in the design of V4, which either does not allow deterministic append behavior when the arrangement of data is not known to the caller, or does not return the inserted data and range, and thus in the end does not provide a good alternative for migration from V3's insert operation.
ja...@compal.corp-partner.google.com <ja...@compal.corp-partner.google.com> #6
Another data point on why this is considered a bug is the Google announcement here
We want to make sure that projects originally built on the v3 API continue working long after it is gone.
This is the only thing we want too, but sometimes V4 makes it hard to do and has already forced us to make several workarounds, so I cannot fully agree with the below statement from the same announcement.
As part of the migration to the Sheets v4 API, which provides a better developer and user experience, we will be retiring the Sheets v3 API
nk...@google.com <nk...@google.com> #7
Thank you for the further information.
Let me try and summarize this in my words to see if we are in agreement:
You are reliant on the behavior of appending cells in v3 (specifically the insert behavior).
The task is, in a very large spreadsheet, to insert a row at the end of the sheet and have the response include the data inserted and its location in the sheet.
v4 offers two methods to do this, spreadsheet.values.append and spreadsheets.batchUpdate using the appendCells request, and both of these methods have slightly different behavior.
spreadsheet.values.append will try to automatically detect the table in the range that is given as a non-optional parameter and will append the data after the last line of the "table". This does return the data inserted and its location.
spreadsheets.batchUpdate using the appendCells request does not seem to detect tables in this way; it will just detect the last row that has data and append after that, starting in column A. This can return the position and values, but only in the form of the whole spreadsheet resource.
For your use case, neither is suitable. spreadsheet.values.append does not work because you may have some empty rows, so it might not insert values at the end of the table. spreadsheets.batchUpdate is not suitable because the spreadsheet resource is too large.
Is that accurate?
an...@google.com <an...@google.com> #8
Yes, your summary is spot on.
spreadsheets.batchUpdate is not suitable because the spreadsheet resource is too large.
I would just add that it also does not return the appended row's position and data, and hence the only other option is to retrieve the sheet, which is too large to do on a regular basis.
lo...@compal.corp-partner.google.com <lo...@compal.corp-partner.google.com> #9
Thank you. I have raised this internally. Once we have some updates, we will post them here.
an...@google.com <an...@google.com> #10
Nathan,
Would you please evaluate the changes proposed in comment #9 and let us know if you see any apparent problems/issues?
Meanwhile, Compal did the VIF comparison with Brya. Do we have anyone who can take a look at the PDO configuration in comment #9?
Note: Compal has no chance to run full validation of PD compliance with the changes in #9, since the debug/validation time is outside the timeframe of the GRL EV test contract. It would be appreciated if you could take a look from a code-flow perspective and shed light on it.
Thank you.
an...@asus.corp-partner.google.com <an...@asus.corp-partner.google.com> #11
Update our time plan for this issue:
We will arrange to retest PD2.0 & PD3.0 with the potential fix in EV2 (EV2: 7/22~8/1).
We would like to have a potential fix for PD2.0 before 7/15.
Thanks!
an...@asus.corp-partner.google.com <an...@asus.corp-partner.google.com> #12
Could you please check #9 and #10 and see if there's any comment or suggestion from your side?
Thanks a lot!
an...@google.com <an...@google.com> #13
Hi nkolluru@, Friendly ping.
nk...@google.com <nk...@google.com> #14
RE#13:
ACK, reviewing logs now.
nk...@google.com <nk...@google.com> #15
There are a couple of issues here; I am documenting them now so I don't forget.
These issues aren't strictly related to the failure, but they emerged during review.
Please don't view this as a comprehensive analysis of this issue.
(I will refer back to these numbers when stating issues further down.)
- There is no bounds checking for "USB-PD revision" when extracting the power field bits in crrev/c/3592313:
https://chromium-review.googlesource.com/c/chromiumos/platform/ec/+/3592313/12/common/usbc/usb_pd_dpm.c#736
- There are no checks for the CapMismatch ("op_curr" versus "max_curr") fields:
https://chromium-review.googlesource.com/c/chromiumos/platform/ec/+/3592313/12/common/usbc/usb_pd_dpm.c#737
- There are no checks (or accounting) for Sinks with "GiveBack" support.
- There are no checks (or accounting) for Sinks that issue a REQUEST message (CapMismatch=0) with an OpCurr of "1.5A (I will use this right now)" but a MaxCurr of "3.0A (I will request up to this amount very soon)".
https://source.corp.google.com/search?q=common%2F%20RDO_CAP_MISMATCH%20-fpmcu&sq=&ss=piper%2FGoogle%2Fchromeos_public
The lack of RDO_CAP_MISMATCH checking in the DPM's static void balance_source_ports(void); is most concerning, because this bit changes the decoding of the entire packet and the meaning of the fields. A TBT3 partner may see 1.5A and issue a Request with a 0.5A OpCurr but a 3.0A "DesiredCurr", and the Chromebook will never "balance" to give it more (based on my current reading of the code).
https://source.corp.google.com/chromeos_public/src/platform/ec/common/usbc/usb_pd_dpm.c;rcl=c020eb91f162c7d77aa6ffba0f0b97412e371f30;l=635
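To illustrate why the bit matters, here is a sketch of the fixed-supply RDO field layout per the USB PD 2.0/3.0 specs (Python used purely for illustration; this is not EC code):

def decode_fixed_rdo(rdo: int) -> dict:
    """Decode a fixed/variable-supply Request Data Object (illustrative)."""
    cap_mismatch = bool(rdo & (1 << 26))                  # bit 26
    decoded = {
        "object_position": (rdo >> 28) & 0x7,             # bits 30:28 (PD 2.0)
        "giveback": bool(rdo & (1 << 27)),                # bit 27
        "cap_mismatch": cap_mismatch,
        "op_current_ma": ((rdo >> 10) & 0x3FF) * 10,      # bits 19:10, 10 mA units
    }
    # Bits 9:0 change meaning with CapMismatch: with it set, they carry the
    # current the sink actually wants; with it clear, merely a maximum.
    low_field_ma = (rdo & 0x3FF) * 10
    if cap_mismatch:
        decoded["desired_current_ma"] = low_field_ma
    else:
        decoded["max_current_ma"] = low_field_ma
    return decoded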
It seems sink_max_pdo_requested is only set based on the SinkCap, not even on a "Request (CapMismatch=1) (DesiredCurr=3.0A)", if there is one. See sink_max_pdo_requested in the dpm_evaluate_sink_fixed_pdo() function:
https://source.corp.google.com/chromeos_public/src/platform/ec/common/usbc/usb_pd_dpm.c;rcl=c020eb91f162c7d77aa6ffba0f0b97412e371f30;l=769
and the PDO_FIXED_CURRENT macro:
https://source.corp.google.com/chromeos_public/src/platform/ec/include/ec_commands.h;rcl=94f92cd7f12736e9b59e0fd399d5663789019b22;l=6858
This seems like a bug.
nk...@google.com <nk...@google.com> #16
Reading from the [06-25-21] USB PD Compliance Package (no longer on the USB-IF usb.org Documents site), directory name PD_2.0_Compliance_Package_09262018, file name Deterministic PD Compliance MOI 1.pdf:
There is no support in Compliance testing for "Dynamic Source Port Balancing". In fact, "Shared Capacity" testing is poorly written, or not included at all.
3.13 POWER SOURCE/SINK PRIMARY TESTS
3.13.1 TDA.2.3.1.1: BMC-POW-SRC-LOAD-P-PC Source Dynamic Load Test, Provider or Provider/Consumer (...)
- The Tester gets the UUT into PD Mode (PROC-PD-MODE), initially requesting PDO#1 at 100mA.
- Wait until a Source Capabilities message is received, note the number of Power Data Objects, and record their contents.
a. Check that they are identical to the list provided by the vendor [BMC_POW_SRC_LOAD_P_PC_1].
b. If at any time during the following steps a further Capabilities message is received, the PDOs shall be compared to the previous ones.
c. If they differ, report the details, and the test ends as a failure.
CrOS EC dynamic balancing (as I read/interpret it) will fail:
- (5a) due to first advertising [5V/1.5A] (pre-Request)
- (5b) if for some reason GRL's "100mA" REQUEST [with unknown CapMismatch/MaxCurr] or SNKCAP [unknown] triggers a change
- (5c) due to re-advertisement/balancing, period.
I'm not sure what the AI is here. PD2.0 compliance is deprecated.
There's not really a way to "fix" this, since the USB-IF CTS test plan is frozen.
At best, we could add hacks/a "compliance mode" in the CrOS EC to work around this and "get the green light to light up on the tester box".
- Also, we won't be addressing the /actual/ failures, like those illustrated in RE#15.
Where do we want to devote our engineers' valuable time and labor here, ansontseng@?
an...@google.com <an...@google.com> #17
Hi Intel team,
Meanwhile, would you please confirm again on the request for PD 2.0 compliance? Given that the PD spec keeps evolving and PD2.0 compliance is out of date, could you help explore the TBT cert requirement with PD 3.0 compliance only? Feel free to let us know if you have any concerns.
Thank you.
nk...@google.com <nk...@google.com> #18
Per GVC with ansontseng@:
The shortest path forward is a "compliance mode" command in the EC that forces the policy flag to 3A (on until disabled/always-on) for the port under test.
Again, any sort of dynamic CB port behavior will run afoul of legacy test step (5c).
- There are other "proper" fixes discussed, such as coding full-blown "Shared Capacity" and "GoToMin" features.
- But these are "Feature Requests", not "bug fixes", so they will likely be punted to a future CrOS EC / Zephyr release.
an...@google.com <an...@google.com> #19
Thank you Nathan for the analysis and technical advice.
Abe / Caveh,
It seems to be a problem across Brya devices, based on the current CrOS dynamic balancing mechanism. While we are waiting for the clarification (#17) from the Intel TBT team, would you please evaluate #18, a compliance mode command in the EC, to work around the 5V/3A test cases PD 2.0 TDA.2.3.1.1 and TDA.2.3.1.2, or any possible workaround to help partners proceed?
en...@intel.com <en...@intel.com> #20
Hi Anson,
The TBT4 EV requires PD3.0 qualification, and PD3.0 as defined by USB-IF is designed to be fully interoperable with PD2.0.
an...@google.com <an...@google.com> #21
Added Brya TPM Chinmay and TL Alex for vis.
Hi Abe,
I am temporarily assigning this to you since Caveh is OOO. Please feel free to point me to the right owner from the EC side to explore any workaround as a short-term solution to mitigate the PD issue. The same issue is supposed to be raised on brya devices or future devices, based on the analysis in #15 and #16.
Thank you.
al...@google.com <al...@google.com> #22
The TBT4 EV requires PD3.0 qualification and the PD3.0 defined by USB-IF is designed to be fully interoperable with PD2.0.
Enzo: I think Nathan's point in
Looking through USB PD CTS 1.4 v3 OR, 2022-04-22, I think the analogous tests (run with PD 2.0 as well as PD 3.0) would be something like COMMON.CHECK.PD.7 Check Source Capabilities Message, which is performed in various named tests, for instance TEST.PD.PS.SRC.2 PDO Transitions.
I think we would probably fail COMMON.CHECK.PD.7 for the same reason as we fail TDA.2.3.1.2. Specifically, we would fail step 4.k B9...0 (Maximum Current), case ii.1, which expects Maximum Current equal to Src_PDO_Max_Current1. But at least it's possible that this test could be changed to accommodate our behavior, which I believe is actually compliant with the PD 2.0 spec in this regard.
Shortest path forward is a "compliance mode" command in EC.
Would you want an EC shell command? A host command with a way to trigger it from ectool? Would the condition need to survive a reboot?
dz...@google.com <dz...@google.com> #23
I would propose that instead of a "compliance mode" command, we:
- Correct our VIF to report we're a shared power source
- Implement the BIST Shared Test Mode commands required for a shared power source
This would bring our stack into greater compliance, and should ensure the suite runs more smoothly, if I read TEST.PREP.PR.1 Preparation for Bring-up Source UUT correctly in the most recent compliance test spec.
an...@asus.corp-partner.google.com <an...@asus.corp-partner.google.com> #24
Since the deadline is getting closer, could Google help provide more advice, or is there any potential fix?
Thanks a lot!!
an...@google.com <an...@google.com> #25
Thank you dzigterman@ and alevkoy@. Will you work on the changes proposed in #23?
dz...@google.com <dz...@google.com> #26
I may have cycles to work on this in the next couple of weeks if there are resources available to test out the code and suggested VIF changes.
How urgent is this issue? If it's immediately blocking an essential board phase, then perhaps a brya EC engineer should work on it.
an...@asus.corp-partner.google.com <an...@asus.corp-partner.google.com> #27
Since the TBT cert. EV2 lab test will start on 7/22, is it possible to provide the new VIF by 7/21?
The EV2 test schedule and results (7/22~8/1) will impact TBT cert. FV (8/4~8/19, contingent on EV2 passing).
Thanks!
dz...@google.com <dz...@google.com> #28
Caveh, do you have time to take a look at this?
To implement BIST shared test mode, we'd want:
- DPM changes to support the mode
  - APIs to enter and exit BIST shared test mode
  - The APIs could attempt either to change around the variables tracking which ports may use maximum power, or to short-circuit all of the port power balancing considerations when in shared mode
- PE changes for shared test mode
  - New processing in pe_bist_tx_entry() to enter and exit shared test mode (reference spec section 6.4.3.3, BIST Shared Capacity Test Mode)
  - Shared test mode should cause us to return to a Ready state, not remain in the BIST state
- VIF changes to correctly report that we're a shared charge source
  - We'll probably want to make all ports capable of acting as a "master" port, but I'll leave that to you
  - Ports should report themselves as "shared" and within the same gang
  - Gang power should be the maximum power we can supply in total
I don't believe we need the GotoMin feature mentioned in Table 6-76, Applicability of Control Messages.
ca...@google.com <ca...@google.com> #29
hi,
no spare cycles here - mostly working on ghost/squall these days.
based on #28, this looks complex enough that someone more involved in our PD stack should take over.
dz...@google.com <dz...@google.com> #30
Does brya have any assigned EC engineers at this point?
an...@google.com <an...@google.com> #31
Hi @chinmaym @levinale,
Since it's an issue across all Brya siblings (#15 and #16), would you please help to check on internal resources from the Brya team? The issue not only blocks the Felwinter TBT sku but also impacts future devices which may apply for TBT cert.
Thank you.
dz...@google.com <dz...@google.com> #32
If brya doesn't have any assigned resources, I can try coding this next week after we've sorted out our skyrim proto boot issues.
an...@google.com <an...@google.com> #33
Thank you Diana for the help. It would be appreciated if we could make some progress next week. Thanks!
an...@asus.corp-partner.google.com <an...@asus.corp-partner.google.com> #34
Is there any progress or update that Google can share with us?
Thanks for your help!!!
dz...@google.com <dz...@google.com> #35
I plan to get the BIST shared mode coded with a unit test in the next day or so. I don't have the ability to execute an actual compliance run this week, but I'll post recommendations for how the VIF should change to indicate our shared power pool when the code is posted.
dz...@google.com <dz...@google.com> #36
EC change to enable BIST shared mode here:
VIF changes suggested:
- Master_port set to "true" for all ports
- Product_Total_Source_Power_mW set to (number of USB-C ports * 7.5) + 7.5 W (ex. 22.5 W for a 2-port system)
- Port_Source_Power_Type "shared" for all ports
- Port_Source_Power_Gang set to some shared name for all the ports
- Port_Source_Power_Gang_Max_Power set to the same as the Product_Total_Source_Power
lo...@compal.corp-partner.google.com <lo...@compal.corp-partner.google.com> #37
Re #36
- Master_port set to "true" for all ports.
- Product_Total_Source_Power_mW set to (number of USB-C ports * 7.5) + 7.5 W (ex. 22.5 W for a 2-port system)
These modifications are attached:
<vif:Master_Port value="true" />
<vif:Product_Total_Source_Power_mW value="23500">23500 mW</vif:Product_Total_Source_Power_mW>
I have no idea how to modify the items below. What does "shared" mean in Port_Source_Power_Type? Could you please share some advice?
* Port_Source_Power_Type "shared" for all ports
* Port_Source_Power_Gang set to some shared name for all the ports
* Port_Source_Power_Gang_Max_Power set to the same as the Product_Total_Source_Power
dz...@google.com <dz...@google.com> #38
Those will come from the "Product Power" section. The type is this one right now:
<vif:Port_Source_Power_Type value="0">Assured</vif:Port_Source_Power_Type>
It would need to be "shared". I believe once you've selected that, the VIF editor should prompt you for the power gang information. I attached a screenshot of the relevant VIF spec section for your reference.
lo...@compal.corp-partner.google.com <lo...@compal.corp-partner.google.com> #39
Re #38
Attached is the modified VIF.
Is the setting correct?
Thanks
dz...@google.com <dz...@google.com> #40
I believe your total power should be 22500 rather than 23500, since that gives 15 W for one port + 7.5 W for the other port. Otherwise, it looks good! Let me know how it runs on the compliance tester.
ap...@google.com <ap...@google.com> #41
Branch: main
commit 83f85e3648f515896626a5e21595109dd5d824b7
Author: Diana Z <dzigterman@chromium.org>
Date: Wed Jul 27 09:09:21 2022
TCPMv2: Add BIST shared mode
Systems which have a shared power reserve over ports are required to
implement BIST shared test mode. This mode will force us to advertise
more current than we can actually support, but it is only for test
purposes and the tester should not actually draw past our VIF declared
maximum.
BRANCH=None
BUG=b:237256250
TEST=zmake testall
Signed-off-by: Diana Z <dzigterman@chromium.org>
Change-Id: Iacb17e0b3eb14c5b38220c7008aa3d2a8f0607a9
Reviewed-on:
Commit-Queue: Abe Levkoy <alevkoy@chromium.org>
Reviewed-by: Abe Levkoy <alevkoy@chromium.org>
M test/fake_usbc.c
M common/usbc/usb_pd_dpm.c
M include/usb_pd.h
M common/mock/usb_pd_dpm_mock.c
M common/usbc/usb_pe_drp_sm.c
M include/usb_pd_dpm.h
an...@asus.corp-partner.google.com <an...@asus.corp-partner.google.com> #42
As discussed in the 4-way meeting, we need Google's help to build a FW including the TBT PD2.0 formal solution (
Could you please help with the build?
Thanks for your help!
ap...@google.com <ap...@google.com> #43
Branch: main
commit 84b96ab02b35d66d87f1bbf6a850a71267a018d1
Author: Diana Z <dzigterman@chromium.org>
Date: Wed Jul 27 15:31:26 2022
Zephyr test: Add BIST shared mode test
Add a test for BIST shared mode to ensure we're properly following our
spec requirements for it.
BRANCH=None
BUG=b:237256250
TEST=zmake testall
Signed-off-by: Diana Z <dzigterman@chromium.org>
Change-Id: If476fb5faed328c6e9fc4c94db0484f3166b357e
Reviewed-on:
Reviewed-by: Abe Levkoy <alevkoy@chromium.org>
M zephyr/test/drivers/default/CMakeLists.txt
A zephyr/test/drivers/default/src/integration/usbc/usb_pd_bist_shared.c
ap...@google.com <ap...@google.com> #44
Branch: main
commit 6e157ac150249f8962abdeac957c9ca71dc9445d
Author: Diana Z <dzigterman@chromium.org>
Date: Wed Jul 27 14:52:14 2022
Zephyr test: Store last 5V fixed source cap for reference
The 5V fixed source cap may have a number of testable fields we'd be
interested in, such as the power offered or static capabilities
advertised. Store it for the tests to reference, and allow them to
clear it when desired.
BRANCH=None
BUG=b:237256250
TEST=zmake testall
Signed-off-by: Diana Z <dzigterman@chromium.org>
Change-Id: Ic396be8ca30ba5f1a86c1da1fe60a7a4c66dbea1
Reviewed-on:
Reviewed-by: Abe Levkoy <alevkoy@chromium.org>
M zephyr/include/emul/tcpc/emul_tcpci_partner_snk.h
M zephyr/emul/tcpc/emul_tcpci_partner_snk.c
ap...@google.com <ap...@google.com> #45
Branch: main
commit 0b5019e0c65fe0f66ea77ec37531b83705cb58ec
Author: Diana Z <dzigterman@chromium.org>
Date: Wed Jul 27 14:31:05 2022
Zephyr test: Create a shared sink connection utility
Many tests are repeating essentially the same code to connect a sink.
Make a common utility for them all to reference.
BRANCH=None
BUG=b:237256250
TEST=zmake testall
Signed-off-by: Diana Z <dzigterman@chromium.org>
Change-Id: Ic7bb083992b67414e58c8b8fb932e8f7f58c8f29
Reviewed-on:
Reviewed-by: Aaron Massey <aaronmassey@google.com>
M zephyr/test/drivers/common/include/test/drivers/utils.h
M zephyr/test/drivers/default/src/console_cmd/charge_manager.c
M zephyr/test/drivers/usb_malfunction_sink/src/usb_malfunction_sink.c
M zephyr/test/drivers/common/src/utils.c
M zephyr/test/drivers/default/src/integration/usbc/usb_5v_3a_pd_sink.c
ri...@google.com <ri...@google.com> #46
re #42,
Hi Annie,
The 14505.192.0 is on CPFE now, but it does not include the CLs in #43~#45.
ap...@google.com <ap...@google.com> #47
Branch: firmware-dedede-13606.B-master
commit 778fd7e2d319624ffc090e0388a1159da6798da9
Author: Aseda Aboagye <aaboagye@google.com>
Date: Wed Aug 10 12:15:18 2022
Merge remote-tracking branch cros/main into firmware-dedede-13606.B-master
Generated by: ./util/update_release_branch.py --baseboard dedede --relevant_paths_file
./util/dedede-relevant-paths.txt firmware-dedede-13606.B-master
Relevant changes:
git log --oneline 84e4e5863f..be9d663832 -- baseboard/dedede
board/beadrix board/beetley board/blipper board/boten board/bugzzy
board/corori2 board/cret board/drawcia board/drawcia_riscv board/galtic
board/kracko board/lantis board/madoo board/magolor board/metaknight
board/pirika board/sasuke board/sasukette board/shotzo board/storo
board/waddledee board/waddledoo board/wheelie common/charge_state_v2.c
common/mkbp_* common/ocpc.c common/usbc/usb_tc_drp_acc_trysrc_sm.c
common/usbc/usb_sm.c common/usbc/*_pd_* common/usbc/dp_alt_mode.c
common/usbc/usb_prl_sm.c common/usbc/usb_pe_drp_sm.c
common/usb_charger.c common/usb_common.c common/usbc_ocp.c
driver/charger/sm5803.* driver/charger/isl923x.* driver/tcpm/raa489000.*
driver/tcpm/it83* include/power/icelake.h include/intel_x86.h
power/icelake.c power/intel_x86.c util/getversion.sh
4342d4ee61 shotzo: Remove hdmi hpd pin
5ec5a53cc7 shotzo: Porting recovery botton
ce9fb3e74b shotzo: Porting led
1384c24972 shotzo: Configure LCD backlight driver OZ554
9fc4b345d7 shotzo: Configure barrel jack adapter
972d948059 shotzo: Remove charge port c1
a639c13eca util: remove unused includes
3897c58004 Revert "mkbp: don't queue mkbp events in S3"
452460d535 shotzo: Remove unused features
7264165cad sm5803: Add support for board with only one charger chip
8a2a05c677 sm5803: Fix failed to read VBUS after resuming from hibernation
83f85e3648 TCPMv2: Add BIST shared mode
823f865151 usbc-pd: Allow setting an initial debug level
0b5d4baf5a util/getversion.sh: Fix empty file list handling
633b722d46 RAA489000: Add extra registers for tcpci dump
BRANCH=None
BUG=b:234426826 b:236325357 b:234665044 b:238057993 b:235791717
BUG=b:240541974 b:237256250 b:240574048 b:240506854 b:235983675
BUG=b:241215360
TEST=`make -j buildall`
Signed-off-by: Aseda Aboagye <aaboagye@google.com>
Change-Id: I2a0d299e343f500199e645f5b9f456ccaae0509b
ca...@google.com <ca...@google.com> #48
Now that BIST shared mode support has landed, should we enable Master_Port in our VIF files (and update genvif)?
dz...@google.com <dz...@google.com> #49
Yes, for boards that have a non-zero CONFIG_USB_PD_3A_PORTS.
Compal, was testing done to verify the VIF changes?
an...@asus.corp-partner.google.com <an...@asus.corp-partner.google.com> #50
Update current FV status: no pending items for debugging.
GRL will provide FV report on 8/30.
(
an...@asus.corp-partner.google.com <an...@asus.corp-partner.google.com> #51
Felwinter got the TBT cert (EV/FV pass).
Please help close this issue if there's no other concern, thanks!
de...@google.com <de...@google.com> #52
Do we still need to modify the VIF, or is it handled by genvif already?
Master_port set to "true" for all ports
Product_Total_Source_Power_mW set to (number of USB-C ports * 7.5) + 7.5 W (ex. 22.5 W for a 2-port system)
Port_Source_Power_Type "shared" for all ports
Port_Source_Power_Gang set to some shared name for all the ports
Port_Source_Power_Gang_Max_Power set to the same as the Product_Total_Source_Power
dz...@google.com <dz...@google.com> #53
This is not yet handled for genvif. I believe other boards (ex. crota) have been putting this in their vif_override.xml files.
dz...@google.com <dz...@google.com> #54
From the Description:
Context
Two PD 2.0 failed items were found in the compliance test, which is required for TBT certification.
PD 2.0 failed items:
TDA2.3.1.1 -- C2 requests 5V/3A but the DUT rejects it.
TDA2.3.1.2 -- C2 requests 5V/1.5A but the DUT sends Source Capabilities of 5V/1.5A (GRL expects 5V/3A, matching the VIF config)
For failure analysis, see
Implication:
These two PD2.0 failed items are now blocking TBT certification for the Felwinter TBT sku. See
We need the brya team's help to prioritize this issue since it may impact all devices with TBT skus.
Thanks.