Status Update
Comments
su...@google.com <su...@google.com> #2
ge...@fivetran.com <ge...@fivetran.com> #3
Steps to reproduce
What steps are needed to reproduce this issue?
Frequency
How frequently does this issue occur? (e.g., 100% of the time, 10% of the time)
Expected output
What is the expected output?
Current output
What is the current output?
Android bug report
After reproducing the issue, press the volume up, volume down, and power button simultaneously. This will capture a bug report on your device in the “bug reports” directory. Attach the bug report file to this issue.
Alternate method:
After reproducing the issue, navigate to developer settings, ensure 'USB debugging' is enabled, then enable 'Bug report shortcut'. To take a bug report, hold the power button and select the 'Take bug report' option.
og...@gmail.com <og...@gmail.com> #4
ke...@gmail.com <ke...@gmail.com> #5
qa...@gmail.com <qa...@gmail.com> #6
Steps to reproduce
Configure an L2TP/IPsec PSK VPN using the native VPN client on Android 8 and try to connect.
Frequency
Issue occurs 100% of the time.
Expected output
Connected to VPN
Current output
Message that connection failed.
Router log (IP-ADRES stands for the Android device's IP address):
2017-10-20 13:12:21 [IPSEC/IKE][Local][66:-][@IP-ADRES] may be security method/sbunet setting/GRE setting unmatched?
2017-10-20 13:12:21 [IPSEC/IKE][Local][66:-][@IP-ADRES] state transition fail: STATE_QUICK_R0
2017-10-20 13:12:19 sent MR3, ISAKMP SA established with IP-ADRES. In/Out Index: 66/0
2017-10-20 13:12:19 Matching General Setup key for dynamic ip client...
2017-10-20 13:12:19 NAT-Traversal: Using RFC 3947, no NAT detected
2017-10-20 13:12:19 Accept Phase1 prorosals : ENCR OAKLEY_AES_CBC, HASH OAKLEY_SHA
2017-10-20 13:12:19 Matching General Setup key for dynamic ip client...
2017-10-20 13:12:19 Find Phase1 proposal: SHA2_256
2017-10-20 13:12:19 Matching General Setup key for dynamic ip client...
2017-10-20 13:12:19 Find Phase1 proposal: SHA2_256
2017-10-20 13:12:19 Matching General Setup key for dynamic ip client...
2017-10-20 13:12:19 Matching General Setup key for dynamic ip client...
2017-10-20 13:12:19 Responding to Main Mode from IP-ADRES
Thanks for looking into this and good luck!
su...@google.com <su...@google.com> #7
1. Set up a VPN connection using IPSec Xauth PSK with an IPSec key
2. Connect to VPN server
3. Connection attempt will fail
Frequency
100%
Expected output
Connect to VPN server successfully
Current output
Connection attempt fails
ma...@gmail.com <ma...@gmail.com> #8
As soon as I activate "permanent", it can't connect... just like in 6.x and 7.x (now on 8.0.0).
ge...@fivetran.com <ge...@fivetran.com> #9
yx...@google.com <yx...@google.com> #10
Can you please check whether the issue is reproducible with the latest Android 8.1.0 build (OPM1.1171019.011) on a Pixel device?
If the issue is still reproducible, please capture a bug report and share it here.
ge...@fivetran.com <ge...@fivetran.com> #11
yx...@google.com <yx...@google.com> #12
[Deleted User] <[Deleted User]> #13
I'm curious whether these are related or my case is unique. If possible, run 'ifconfig' on the device while trying to connect and compare the MTU of the ppp0 interface with that of your main Wi-Fi/LTE interface.
yx...@google.com <yx...@google.com> #14
[Deleted User] <[Deleted User]> #15
Should be ...
MERGE dest T
USING (SELECT * FROM source CROSS JOIN UNNEST([true, false]) is_insert) S
ON T.id = S.id AND NOT is_insert
WHEN MATCHED THEN
DELETE
WHEN NOT MATCHED AND is_insert THEN
INSERT(...) VALUES(...)
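For readers unfamiliar with the trick in #15, here is a rough Python simulation (a sketch only, not BigQuery itself) of what the CROSS JOIN UNNEST([true, false]) expansion achieves: every source row appears twice, once to delete a matching target row and once to be inserted, so matched rows are effectively replaced.

```python
def merge_replace(target_rows, source_rows):
    """Simulate the MERGE in #15: each source row is paired with
    is_insert in {True, False}; the False copy can match a target row
    on id (WHEN MATCHED THEN DELETE), while the True copy never matches
    (WHEN NOT MATCHED AND is_insert THEN INSERT)."""
    expanded = [(row, is_insert) for row in source_rows for is_insert in (True, False)]
    # WHEN MATCHED THEN DELETE: drop target rows whose id appears in a non-insert copy
    delete_ids = {row["id"] for row, is_insert in expanded if not is_insert}
    kept = [t for t in target_rows if t["id"] not in delete_ids]
    # WHEN NOT MATCHED AND is_insert THEN INSERT: insert every is_insert copy
    inserted = [row for row, is_insert in expanded if is_insert]
    return kept + inserted

target = [{"id": 1, "v": "old"}, {"id": 2, "v": "keep"}]
source = [{"id": 1, "v": "new"}, {"id": 3, "v": "added"}]
merged = merge_replace(target, source)
print(merged)  # id 1 replaced, id 2 untouched, id 3 inserted
```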
ya...@gmail.com <ya...@gmail.com> #16
If so, it would be an effective (cost + time) method to "replace" old records based on a key (single or multi-column) and optionally a time-/datestamp.
Our use case: we receive and load data into 200 tables, many containing soft dupes, that is, rows with the same key but different versions based on a last-updated date.
Today we load everything into the target and dedupe with the window function ROW_NUMBER() OVER (PARTITION BY <key> ORDER BY last_updated DESC), keeping only the latest version.
This means a full table scan of all columns each time. A MERGE would only need to read the key in the target.
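The dedupe described in #16 can be sketched in Python (the field names "key" and "last_updated" are illustrative, not from the original schema): keep only the newest version per key, as filtering ROW_NUMBER() OVER (PARTITION BY <key> ORDER BY last_updated DESC) to 1 would.

```python
def dedupe_latest(rows, key="key", ts="last_updated"):
    # Keep only the row with the greatest timestamp per key,
    # mirroring the ROW_NUMBER() ... DESC window filtered to row 1.
    latest = {}
    for row in rows:
        k = row[key]
        if k not in latest or row[ts] > latest[k][ts]:
            latest[k] = row
    return list(latest.values())

rows = [
    {"key": "a", "last_updated": "2023-01-01", "v": 1},
    {"key": "a", "last_updated": "2023-02-01", "v": 2},  # newer version of "a"
    {"key": "b", "last_updated": "2023-01-15", "v": 3},
]
deduped = dedupe_latest(rows)
print(deduped)
```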
yx...@google.com <yx...@google.com> #17
ya...@gmail.com <ya...@gmail.com> #18
yx...@google.com <yx...@google.com> #19
ch...@gmail.com <ch...@gmail.com> #20
Will something like this be supported?
MERGE `targetdb.cool_table` T
USING (SELECT * FROM `sourcedb.source149837843989`) S
ON T.id = S.id
WHEN MATCHED THEN
DELETE; INSERT ROW
WHEN NOT MATCHED THEN
INSERT ROW
...I guess if there is a different number of columns it will always be an error? So maybe this isn't the right solution for me :/ Back to the drawing board.
yx...@google.com <yx...@google.com> #21
se...@kfish.com <se...@kfish.com> #22
ni...@google.com <ni...@google.com> #23
Yes, we are making good progress on this feature request and hope to share some updates soon.
For the time being, if your use case is to replicate changes from an OLTP database source, have you looked into the Datastream for BigQuery preview [1]?
[1]
[Deleted User] <[Deleted User]> #24
Hi, is it currently possible to update all values of a row without having to write out every column? It gets pretty tedious to list out all columns after UPDATE SET, especially if I want to run this for multiple tables.
For INSERT it is possible to use ROW, but I couldn't find the syntax to do this with UPDATE.
MERGE `target` T
USING `source` S
ON T.userId = S.userId
WHEN MATCHED AND (condition) THEN
UPDATE SET
col1=S.col1,
col2=S.col2,
col3=S.col3,
col4=S.col4,
col5=S.col5,
col6=S.col6
WHEN NOT MATCHED THEN
INSERT ROW
In reality I have >20 columns and had to write a Python script to generate the SQL code.
dml_query = """MERGE `target` T
USING `source` S
ON T.userId = S.userId
WHEN MATCHED AND (condition) THEN
UPDATE SET
"""
for col in ["col1", "col2", "col3", "col4", "col5", "col6"]:
    dml_query += f"  {col}=S.{col},\n"
# remove the trailing ",\n"
dml_query = dml_query[:-2]
dml_query += """\nWHEN NOT MATCHED THEN
INSERT ROW"""
print(dml_query)
Is it already possible, or we have to wait for this feature to be implemented as mentioned in #23?
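As a side note on the script in #24: the column list can also be built in one expression with str.join, which avoids trimming the trailing comma. This is just a tidier sketch using the same illustrative table and column names, not new syntax:

```python
cols = ["col1", "col2", "col3", "col4", "col5", "col6"]
# Join the assignments with ",\n" so no trailing comma needs removing.
update_set = ",\n".join(f"  {c}=S.{c}" for c in cols)
dml_query = (
    "MERGE `target` T\n"
    "USING `source` S\n"
    "ON T.userId = S.userId\n"
    "WHEN MATCHED AND (condition) THEN\n"
    "UPDATE SET\n"
    f"{update_set}\n"
    "WHEN NOT MATCHED THEN\n"
    "INSERT ROW"
)
print(dml_query)
```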
ni...@google.com <ni...@google.com> #25
Hey Stef, unfortunately updating all columns within an UPDATE statement without explicitly listing them isn't possible. As such, your approach seems to be the best option today, especially if you have tables whose schema occasionally changes.
ni...@google.com <ni...@google.com> #26
FYI the BigQuery change data capture (CDC) feature has now launched into public preview. You can learn more about it with these resources:
Description
1. The table has a "primary key" column (even if it isn't labeled as such) called "id".
2. If a new row has the same primary key as an existing row, it should replace the existing row.
3. Otherwise, it should append to the table.
Typically this is implemented as a transaction:
BEGIN TRANSACTION;
DELETE FROM my_table WHERE id IN (SELECT id FROM staging);
INSERT INTO my_table (SELECT * FROM staging);
END;
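The delete-then-insert transaction above can be demonstrated with an in-memory SQLite database, which (unlike BigQuery at the time of this report) does support transactions. This is a sketch only; the table names follow the example, and the `val` column is illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE my_table (id INTEGER, val TEXT);
    CREATE TABLE staging (id INTEGER, val TEXT);
    INSERT INTO my_table VALUES (1, 'old'), (2, 'keep');
    INSERT INTO staging  VALUES (1, 'new'), (3, 'added');
""")
with conn:  # one atomic transaction: delete matching ids, then insert
    conn.execute("DELETE FROM my_table WHERE id IN (SELECT id FROM staging)")
    conn.execute("INSERT INTO my_table SELECT * FROM staging")
rows = sorted(conn.execute("SELECT * FROM my_table"))
print(rows)
# -> [(1, 'new'), (2, 'keep'), (3, 'added')]
```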
BigQuery doesn't support transactions, and I expect that would be a pretty big feature request! But all we really need is the ability to perform a DELETE followed by an INSERT as one atomic operation. For example, suppose BQ's SQL grammar were extended to allow this as a single statement:
DELETE my_table WHERE id IN (SELECT id FROM staging)
INSERT (SELECT * FROM staging);
This would be sufficient for 99% of ETL use cases.