Status Update
Comments
va...@google.com <va...@google.com>
va...@google.com <va...@google.com> #2
m....@gmail.com <m....@gmail.com> #3
Hi,
Can you provide more information about:
- Steps to reproduce the issue.
- If possible, can you provide sample data for reproduction? Please remove any PII.
If possible, can you also provide a screenshot of the error?
Thanks
va...@google.com <va...@google.com> #4
{
  "insertId": "63a0f443-0000-2ad1-bbc1-f403045f7a4e@a1",
  "jsonPayload": {
    "context": "CDC",
    "event_code": "UNSUPPORTED_EVENTS_DISCARDED",
    "read_method": "",
    "message": "Discarded 1180 unsupported events for BigQuery destination: 880653332314.datastream_txns_public.adjustment_adjustmentmodel, with reason code: BIGQUERY_TOO_MANY_PRIMARY_KEYS, details: Failed to create the table in BigQuery, because the source table has too many primary keys.."
  },
  "resource": {
    "type": "
    "labels": {
      "resource_container": "REDACTED",
      "stream_id": "sandpit-txns-to-sandpit-bq1",
      "location": "europe-west2"
    }
  },
  "timestamp": "2022-10-04T12:56:20.846705Z",
  "severity": "WARNING",
  "logName": "projects/REDACTED/logs/
  "receiveTimestamp": "2022-10-04T12:56:21.525370213Z"
}
m....@gmail.com <m....@gmail.com> #5
CREATE TABLE public.adjustment_adjustmentmodel (
created_date timestamptz NOT NULL,
modified_date timestamptz NOT NULL,
id uuid NOT NULL,
adjustment_type int4 NOT NULL,
adjusted_item_id uuid NOT NULL,
line_item_id uuid NOT NULL,
transaction_reference_id uuid NOT NULL,
promotion_id uuid NULL,
CONSTRAINT adjustment_adjustmentmodel_line_item_id_key UNIQUE (line_item_id),
CONSTRAINT adjustment_adjustmentmodel_pkey PRIMARY KEY (id)
);
va...@google.com <va...@google.com>
co...@monogroup.com <co...@monogroup.com> #6
n....@gmail.com <n....@gmail.com> #7
CASE WHEN pg_index.indisprimary IS NULL THEN $32 ELSE $33 END AS is_primary_key
but pg_index.indisprimary can be 't' or 'f', so just checking for NULL results in columns from non-primary-key indexes being flagged as primary key columns. The line should change to something like:
CASE WHEN pg_index.indisprimary = 't' THEN $33 ELSE $32 END AS is_primary_key
This query also is the cause of another bug: 251216031
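To illustrate on the adjustment_adjustmentmodel table above, here is a quick catalog query (a sketch only, not Datastream's actual introspection SQL) comparing the two checks:
-- Sketch: indisprimary is a boolean ('t'/'f') on every pg_index row, so the
-- IS NULL test marks both the primary-key index and the unique index as primary,
-- while the equality test flags only the real primary key.
SELECT
  ic.relname                                         AS index_name,
  i.indisprimary,
  CASE WHEN i.indisprimary IS NULL THEN 0 ELSE 1 END AS buggy_is_primary_key,
  CASE WHEN i.indisprimary THEN 1 ELSE 0 END         AS correct_is_primary_key
FROM pg_index i
JOIN pg_class ic ON ic.oid = i.indexrelid
WHERE i.indrelid = 'public.adjustment_adjustmentmodel'::regclass;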
bh...@gmail.com <bh...@gmail.com> #8
[Deleted User] <[Deleted User]> #9
ke...@gmail.com <ke...@gmail.com> #10
ra...@gmail.com <ra...@gmail.com> #11
jo...@bqn.com.uy <jo...@bqn.com.uy> #12
Hi @mark.doutre,
I tried replicating your issue by following this
Prior to this I created a table in my CloudSQL Postgres database using the DDL you have provided. See schema [1].
I created mock data using the query below:
INSERT INTO public.adjustment_adjustmentmodel (created_date,modified_date,id,adjustment_type,adjusted_item_id,line_item_id,transaction_reference_id,promotion_id) values ('2022-09-15 19:00+11:00','2022-09-16 19:00+11:00',gen_random_uuid (),1,gen_random_uuid (),gen_random_uuid (),gen_random_uuid (),gen_random_uuid ());
Postgres data is inserted successfully as seen in [2]. I proceeded with creating the connection profiles for both Postgres and BigQuery, then created the stream as seen in [3], and the data was streamed to BigQuery as seen in [4].
Let me know if I missed anything on my reproduction steps so I can retry my replication based on the steps you have taken.
[1] adjustment_schema.png
[2] postgres_query_output.png
[3] created_stream.png
[4] bq_streamed_data.png
ca...@secondstoryrealty.com <ca...@secondstoryrealty.com> #13
When I view the schema in datastream, I see the attached.
ma...@knovik.com <ma...@knovik.com> #14
Hi ri...,
To replicate the issue you need to add indexes that also reference the primary key column. For example, creating this table and trying to make a stream with this table will fail with the BIGQUERY_TOO_MANY_PRIMARY_KEYS error, even though it clearly only has 1 primary key with a single column, id.
CREATE TABLE too_many_keys_failure (
id int,
created_date timestamp,
last_modified_date timestamp,
user_id int,
facility_id int,
manager_id int,
is_available bool,
CONSTRAINT id_pk PRIMARY KEY (id) --NOTE THAT THIS IS THE ONLY PRIMARY KEY!
);
--NOTE THE NON-PRIMARY KEY INDEXES
CREATE INDEX ON too_many_keys_failure (id, user_id, facility_id, manager_id, is_available, last_modified_date);
CREATE INDEX ON too_many_keys_failure (id, user_id, facility_id, manager_id, is_available);
CREATE INDEX ON too_many_keys_failure (id, user_id, facility_id, manager_id, last_modified_date);
CREATE INDEX ON too_many_keys_failure (id, user_id, facility_id, is_available, last_modified_date);
CREATE INDEX ON too_many_keys_failure (id, user_id, facility_id);
CREATE INDEX ON too_many_keys_failure (id, facility_id);
INSERT INTO too_many_keys_failure
VALUES
(1,current_timestamp, current_timestamp, '1','1','2',false),
(2,current_timestamp, current_timestamp, '2','1',null,false),
(3,current_timestamp, current_timestamp, '3','1',null,false);
This appears to be due to how Datastream attempts to detect primary keys in its query to PostgreSQL. I believe there is a bug in the query that was written/generated, where it checks pg_index.indisprimary IS NULL instead of pg_index.indisprimary = 't'.
The screenshot attached shows the failure when creating a new stream for the table public_too_many_keys_failure. Note that the public prefix on the schema is there because I chose the "Single dataset for all schemas" option when setting up the stream.
I've also added to the stream a second table called public_too_this_one_works, which is identical to this table except without the non-primary-key indexes. This one is shown in the screenshot, and we can see that it wrote the 3 records.
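For comparison, counting the actual primary-key columns via pg_constraint (again a sketch, not Datastream's query) returns only the single id column for this table:
-- Sketch: list the real primary-key columns of too_many_keys_failure.
-- Only "id" comes back, whereas the IS NULL check would also count every
-- column that appears in the six secondary indexes above.
SELECT a.attname AS pk_column
FROM pg_constraint con
JOIN pg_attribute a
  ON  a.attrelid = con.conrelid
  AND a.attnum   = ANY (con.conkey)
WHERE con.contype  = 'p'
  AND con.conrelid = 'public.too_many_keys_failure'::regclass;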
ho...@roche.com <ho...@roche.com> #15
Hi @leon.verhelst,
Thank you for providing additional details. I will provide an update on my replication and findings.
ja...@nasserlaw.com <ja...@nasserlaw.com> #16
Hi,
I was able to replicate the issue. I reached out to the product team and created an internal issue about this. Please keep in mind that this issue has to be analyzed and considered by the product team and I can't provide you an ETA for it. However, you can keep track of the status by following this thread.
ra...@gmail.com <ra...@gmail.com> #18
Hi,
I would appreciate it if you could provide your insight on this. Assuming a unique key is a must for clustering to properly load data into BigQuery, would you expect Datastream to randomly choose a unique index if one such index exists and use it as the primary key in BigQuery, or to look for primary keys only and fail if none exists?
Thanks
do...@gmail.com <do...@gmail.com> #19
I would rather see an option on destination creation where the user can specify how the data should be clustered or partitioned, if required. For instance, in my use case I want to take transactional data from Postgres and load it into BQ for analytics purposes. The destination query workloads are going to be different from the source workloads, so it would be an advantage for my use case if I could cluster the data, for example on some user id, to assist in analysis.
au...@infinit-o.com <au...@infinit-o.com> #20
I would expect BigQuery to respect the REPLICA IDENTITY from the source tables and act similarly to PostgreSQL's rules for setting up a publication:
From:
A published table must have a “replica identity” configured in order to be able to replicate UPDATE and DELETE operations, so that appropriate rows to update or delete can be identified on the subscriber side. By default, this is the primary key, if there is one. Another unique index (with certain additional requirements) can also be set to be the replica identity. If the table does not have any suitable key, then it can be set to replica identity “full”, which means the entire row becomes the key. This, however, is very inefficient and should only be used as a fallback if no other solution is possible. If a replica identity other than “full” is set on the publisher side, a replica identity comprising the same or fewer columns must also be set on the subscriber side. See REPLICA IDENTITY for details on how to set the replica identity. If a table without a replica identity is added to a publication that replicates UPDATE or DELETE operations then subsequent UPDATE or DELETE operations will cause an error on the publisher. INSERT operations can proceed regardless of any replica identity.
A Postgres -> BigQuery replication should use the REPLICA IDENTITY
that is set on the source table, which normally is set like so:
- Use the PK if exists
- Otherwise use a specified unique index as per the table definition
- Otherwise use the full row
For information on how to set the replica identity see:
Finding the replica identity for a table is done as described here:
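For reference, the replica identity can be inspected and changed on the Postgres side like this (a sketch using the adjustment_adjustmentmodel table from earlier; the ALTER is only needed if you want a unique index, rather than the default primary-key identity, to be used):
-- Sketch: relreplident is 'd' (default, primary key), 'i' (USING INDEX),
-- 'f' (full row) or 'n' (nothing).
SELECT relreplident
FROM pg_class
WHERE oid = 'public.adjustment_adjustmentmodel'::regclass;

-- Point the replica identity at a specific unique index on NOT NULL columns:
ALTER TABLE public.adjustment_adjustmentmodel
  REPLICA IDENTITY USING INDEX adjustment_adjustmentmodel_line_item_id_key;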
lu...@keynest.com <lu...@keynest.com> #21
lr...@musselmanandhall.com <lr...@musselmanandhall.com> #22
do...@gmail.com <do...@gmail.com> #23
jo...@pwc.com <jo...@pwc.com> #24
ma...@gmail.com <ma...@gmail.com> #25
ma...@knovik.com <ma...@knovik.com> #26
A fix for this bug is currently being rolled out and should be applied to all Google Cloud regions by the end of the week (Oct. 29).
[Deleted User] <[Deleted User]> #27
lu...@keynest.com <lu...@keynest.com> #28
gu...@gmail.com <gu...@gmail.com> #29
Does this issue have the same solution as the following error message?
BIGQUERY_UNSUPPORTED_PRIMARY_KEY_CHANGE
mi...@airbus.com <mi...@airbus.com> #30
Is this a limitation of Datastream, because it does not allow generating the partitioned table?
Or am I doing something wrong?
jo...@gmail.com <jo...@gmail.com> #31
ad...@zalando.de <ad...@zalando.de> #32
In our case, the table it is trying to copy over from Cloud SQL (MySQL) to BigQuery using the new Datastream feature does have 5 columns as its primary key. Is there a limit to the number of columns in a primary key for this to work? Not sure why this is a limitation.
- Error message details: Failed to create the table in BigQuery, because the source table has too many primary keys..
ay...@gmail.com <ay...@gmail.com> #33
Step by step to reproduce this:
1. Start a Datastream stream from PostgreSQL to BigQuery.
2. After the transfer is done and all tables are in BigQuery, pause the job.
3. Partition one of the tables.
4. Resume the job.
5. The log says:
{
  "insertId": "640c1ed5-0000-20cd-8059-883d24fc7d54@a1",
  "jsonPayload": {
    "read_method": "",
    "event_code": "UNSUPPORTED_EVENTS_DISCARDED",
    "context": "CDC",
    "message": "Discarded 25 unsupported events for BigQuery destination: DATASET_ID, with reason code: BIGQUERY_UNSUPPORTED_PRIMARY_KEY_CHANGE, details: Failed to write to BigQuery due to an unsupported primary key change: adding primary keys to existing tables is not supported.."
  },
  "resource": {
    "type": "
    "labels": {
      "resource_container": "",
      "location": "LOCATION",
      "stream_id": "DATASET_ID"
    }
  },
  "timestamp": "2022-11-16T04:40:05.318457Z",
  "severity": "WARNING",
  "logName": "projects/PROJECT_ID/logs/
  "receiveTimestamp": "2022-11-16T04:40:06.332008985Z"
}
I checked, and the only difference is whether the destination table is partitioned or not; the clustering is the same (using the id of that table).
When I changed the destination table back to not being partitioned, it worked successfully.
jj...@downloadtoolbox.com <jj...@downloadtoolbox.com> #34
{
insertId: "64443c8d-0000-2756-9db0-14c14ef32a9c@a1"
jsonPayload: {
context: "CDC"
event_code: "UNSUPPORTED_EVENTS_DISCARDED"
message: "Discarded 1677 unsupported events for BigQuery destination: [my table], with reason code: BIGQUERY_TOO_MANY_PRIMARY_KEYS, details: Failed to create the table in BigQuery, because the source table has too many primary keys.."
read_method: ""
}
logName: "projects/PROJECT_ID/logs/
receiveTimestamp: "2022-11-19T00:51:48.226399021Z"
resource: {2}
severity: "WARNING"
timestamp: "2022-11-19T00:51:48.177058Z"
}
Create table statement from source Postgres Cloud SQL:
create table myschema.mytable
(
company_id bigint not null,
region_id integer not null,
day date not null,
sales numeric not null,
hits numeric not null,
constraint mytable_uniq
unique (company_id, region_id, day)
);
y....@manitou-group.com <y....@manitou-group.com> #35
Apparently there's been some regression to this issue... an updated fix is pending, and will be rolled out ASAP.
I'll update here again once the fix is in production.
ka...@letsmoveonline.tech <ka...@letsmoveonline.tech> #36
Hi Team, it seems I am having the same issue as #33 (this issue is blocking for me in production). Please let us know the status of this.
I am getting an error when trying to partition the destination table in BigQuery while working with Datastream.
Step by step to reproduce this:
1. Start a Datastream stream from Cloud SQL (MySQL) to BigQuery.
2. Once the stream has completed all tables in BigQuery, pause the job.
3. Partition one of the tables.
4. Resume the job.
5. The error log is below:
====================================================
Discarded 97 unsupported events for BigQuery destination: 833537404433.Test_Membership_1.internal_Membership, with reason code: BIGQUERY_UNSUPPORTED_PRIMARY_KEY_CHANGE, details: Failed to write to BigQuery due to an unsupported primary key change: adding primary keys to existing tables is not supported..
{
insertId: "65ad79ec-0000-24c7-a66e-14223bbf970a@a1"
jsonPayload: {
context: "CDC"
event_code: "UNSUPPORTED_EVENTS_DISCARDED"
message: "Discarded 97 unsupported events for BigQuery destination: 833537404433.Test_Membership_1.internal_Membership, with reason code: BIGQUERY_UNSUPPORTED_PRIMARY_KEY_CHANGE, details: Failed to write to BigQuery due to an unsupported primary key change: adding primary keys to existing tables is not supported.."
read_method: ""
}
logName: "projects/gcp-everwash-wh-dw/logs/
receiveTimestamp: "2022-11-22T22:08:38.620495835Z"
resource: {2}
severity: "WARNING"
timestamp: "2022-11-22T22:08:37.726075Z"
}
---------------------------------------------------------------
What did you expect to happen?
I am expecting to be able to partition certain tables that are being populated in BigQuery via Datastream.
Attaching a screenshot for reference.
dm...@usc.edu <dm...@usc.edu> #37
For Postgres/BQ pairing, what are the steps needed to confirm this fix works? Will a running stream with a broken source table self-correct with the new code? A standard cleanup procedure would be very helpful.
- Does the table in question need to be removed (unchecked, saved) and added again in the source configuration?
- Does the stream need to be stopped (paused) and restarted? Deleted and recreated to pickup the new code?
- Does the destination table need to be deleted in BQ?
pu...@uptodatewebdesign.com <pu...@uptodatewebdesign.com> #38
The fix has been rolled out.
To recover from this error:
- If a table was already created in BigQuery, it should be manually deleted (for example, as sketched below).
- Then trigger a backfill for the table in Datastream.
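A minimal sketch of the BigQuery-side cleanup; the project and dataset names below are placeholders, and the backfill itself is triggered from the Datastream console or API rather than from SQL:
-- Sketch with placeholder names: drop the destination table that was created
-- incorrectly, then trigger a backfill for the source table in Datastream so the
-- table is recreated with the corrected primary-key detection.
DROP TABLE IF EXISTS `my_project.my_datastream_dataset.adjustment_adjustmentmodel`;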
[Deleted User] <[Deleted User]> #39
BigQuery Product Manager here. It looks like the request here is to add partitioning to an existing BigQuery table. Unfortunately that's not supported. You have to add partitioning to a net-new table. Technically you can create a newly partitioned table from the result of a query [1], however this approach won't work for existing Datastream sourced tables since there wouldn't be a _CHANGE_SEQUENCE_NUMBER field which is required to correctly apply UPSERT operations in the correct order. So the only option would be to pre-create the table with partitioning/clustering/primary keys before starting the Datastream stream like the below DDL SQL query example [2].
One thing to note, however, is that today partitioning may not be as effective at reducing the data scanned when performing background CDC apply operations, because the background merges could be an UPSERT against any record within the base table and partition pruning isn't propagated to the background operation. It is worth noting, though, that clustering should still be beneficial because clustering is used (the PK fields are also denoted as the clustered fields).
[1]
[2] CREATE TABLE `project.dataset.new_table`
(
`Primary_key_field` INTEGER PRIMARY KEY NOT ENFORCED,
`time_field` TIMESTAMP,
`field1` STRING
#Just an example above. Add needed fields within the base table...
)
PARTITION BY
DATE(time_field)
CLUSTER BY
Primary_key_field #This must be an exact match of the specified primary key fields
OPTIONS(max_staleness = INTERVAL 15 MINUTE) #or whatever the desired max_staleness value is
he...@pedro.plus <he...@pedro.plus> #40
[Deleted User] <[Deleted User]> #41
mc...@texasmedicalcareplans.com <mc...@texasmedicalcareplans.com> #42
@johan.eliasson - does the table actually have more than 4 PKs? If not, can you share the CREATE TABLE statement (you can email it to me directly instead of posting it here)?
If the table has more than 4 PK columns, then this error is currently the expected behavior, but there's a change coming to BQ which will allow more than 4 columns in the PK. I'm not able to share exact timelines for this change, but it's WIP (perhaps @nickorlove can provide more details).
lu...@keynest.com <lu...@keynest.com> #43
This is a public thread, so I'll refrain from providing exact timelines, however please note the limit of 4 PKs is a known issue we are working hard to address.
I'll update this thread once more concrete details can be shared with the broader community.
[Deleted User] <[Deleted User]> #44
I encountered this issue as well. My DDL is:
CREATE TABLE public.spoon_mst (
  spoon_code varchar(32) NOT NULL,
  qr_code varchar(32) NULL,
  scan_date timestamp(6) NULL,
  product_name varchar(128) NULL,
  weight varchar(16) NULL,
  mfg_date timestamp(6) NULL,
  exp_date timestamp(6) NULL,
  is_active bool NOT NULL DEFAULT true,
  status varchar(255) NULL DEFAULT 'UNUSED'::character varying,
  code_length int4 NULL,
  created_date timestamp NULL DEFAULT now(),
  description varchar(255) NULL,
  is_check bool NULL DEFAULT false,
  updated_date timestamp NULL,
  ext_id varchar(255) NULL,
  qr_manufacture_date timestamp NULL,
  is_synced bool NULL DEFAULT true,
  "version" int4 NOT NULL DEFAULT 0,
  CONSTRAINT spoon_mst_pkey PRIMARY KEY (spoon_code)
);
My primary key is a random string generated by an algorithm. When I sync other tables whose primary key is in number format (id), it works well. Does using a primary key in string format cause this issue?
al...@attentio.org.pl <al...@attentio.org.pl> #45
It looks like the issue with your DDL is around syntax, and that the primary key does not match the table's clustering key. An example DDL to create a table to be used with Datastream would be like this:
CREATE TABLE customers ( ID INT64 PRIMARY KEY NOT ENFORCED, NAME STRING, SALARY INT64) CLUSTER BY ID;
lo...@me.com <lo...@me.com> #46
FYI, my earlier comment with a suggested DDL was written in the frame of mind of running the DDL within BigQuery to create a BQ table which would be used as the destination for Datastream replication.
If your question was more about the syntax of running a DDL on the source database, please ignore it.
au...@clarewellclinics.co.uk <au...@clarewellclinics.co.uk> #47
Thank you so much, it worked like a charm after I created the BigQuery table manually with a clustering key and then started a new stream.
gi...@reevolutiva.com <gi...@reevolutiva.com> #48
np...@paulinocontadores.com.ar <np...@paulinocontadores.com.ar> #49
This is now fixed. BigQuery has increased the limit to 16 primary key columns, and Datastream now aligns with this new limit.
BigQuery still doesn't support more than four clustering columns, so when replicating a table with more than four primary key columns, Datastream uses four primary key columns as the clustering columns.
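In DDL terms, that behaviour corresponds to something like the following pre-created destination table (a sketch with hypothetical names): more than four primary key columns are now allowed, but CLUSTER BY still takes at most four of them.
-- Sketch with hypothetical names: five primary-key columns, four clustering columns.
CREATE TABLE `project.dataset.wide_pk_table`
(
  k1 INT64,
  k2 INT64,
  k3 INT64,
  k4 INT64,
  k5 INT64,
  payload STRING,
  PRIMARY KEY (k1, k2, k3, k4, k5) NOT ENFORCED
)
CLUSTER BY k1, k2, k3, k4;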
ki...@kdev.pl <ki...@kdev.pl> #50
er...@publipresse.fr <er...@publipresse.fr> #51
at...@pepkorit.com <at...@pepkorit.com> #52
of...@segments.at <of...@segments.at> #53
ll...@oningroup.com <ll...@oningroup.com> #54
Edit: To be clear, we need all tasks to be available via API. It's not just the Chat tasks that are missing! It's also tasks created for Google Docs, etc.
ra...@gmail.com <ra...@gmail.com> #55
an...@andamp.io <an...@andamp.io> #56
cy...@plutus-research.com <cy...@plutus-research.com> #57
as...@legacy.learnplatform.com <as...@legacy.learnplatform.com> #58
me...@legacy.learnplatform.com <me...@legacy.learnplatform.com> #59
kr...@pwc.com <kr...@pwc.com> #60
[Deleted User] <[Deleted User]> #61
I'm a relatively new user of Google Workspace (before, I usually worked in MS Office or dedicated ERP systems) and I have enjoyed it a lot for its availability and simplicity. I knew Google Tasks is not ideal, but it is enough for simple project or task management, and I thought it was also effective (given its integration with some other apps, like Todoist). It was only when I dug into this subject that I realized there's a problem with tasks created from Google Chat or Google Docs. In the end, not solving this issue calls into question the whole idea of using Google Tasks: what for, if only some of them can be automated?
na...@gmail.com <na...@gmail.com> #62
I like Google Chat. However, Slack and Teams are more productive because Google Chat lacks this feature.
ma...@toho.ne.jp <ma...@toho.ne.jp> #63
ma...@venturelabs.team <ma...@venturelabs.team> #64
ma...@soumirantes.com.br <ma...@soumirantes.com.br> #65
da...@valuegraphics.com <da...@valuegraphics.com> #66
cy...@gmail.com <cy...@gmail.com> #67
en...@angiodroid.com <en...@angiodroid.com> #68
ro...@desidara.com <ro...@desidara.com> #69
At least RESPOND to people, Google. This is starting to look more like arrogance than anything else.
be...@visible.tech <be...@visible.tech> #70
j....@i-t-c.fr <j....@i-t-c.fr> #71
Can you give us an ETA for this API Improvement ? Thanks.
an...@jarcredit.com <an...@jarcredit.com> #72
[Deleted User] <[Deleted User]> #73
ms...@prisma.com <ms...@prisma.com> #74
jo...@gmail.com <jo...@gmail.com> #75
j....@i-t-c.fr <j....@i-t-c.fr> #76
go...@ergonperu.com <go...@ergonperu.com> #77
ll...@oningroup.com <ll...@oningroup.com> #78
be...@gmail.com <be...@gmail.com> #79
jw...@gmail.com <jw...@gmail.com> #80
ol...@liber.com.au <ol...@liber.com.au> #81
ad...@nairobidesignweek.com <ad...@nairobidesignweek.com> #82
ma...@pointofrental.com <ma...@pointofrental.com> #83
It is a massive inefficiency to have a list of tasks for organising yourself that you can't easily group by importance, as you have to scroll. This is where third-party products like TasksBoard come in: they allow you to view and organise your Google Tasks in a much more readable format.
This, however, is also unusable, as any tasks created and assigned in a Google Doc are not visible in TasksBoard due to this issue, which has been around for over 2 years.
ki...@kdev.pl <ki...@kdev.pl> #84
There is no one responsible for this issue... it's sad.
gr...@gmail.com <gr...@gmail.com> #85
ca...@gmail.com <ca...@gmail.com> #86
jb...@gmail.com <jb...@gmail.com> #87
ma...@kemmer.team <ma...@kemmer.team> #88
as...@instructure.com <as...@instructure.com> #89
bl...@cedargroveleedsmedia.org <bl...@cedargroveleedsmedia.org> #90
je...@uneedle.com <je...@uneedle.com> #91
be...@gmail.com <be...@gmail.com> #92
ch...@reducemyinsurance.net <ch...@reducemyinsurance.net> #93
ji...@unitelmasapienza.it <ji...@unitelmasapienza.it> #94
jo...@hallo-nomina.de <jo...@hallo-nomina.de> #95
ma...@dkpdesigns.com <ma...@dkpdesigns.com> #96
+1billion
le...@zebrapig.com <le...@zebrapig.com> #97
an...@amalgerol.com <an...@amalgerol.com> #98
jo...@gmail.com <jo...@gmail.com> #99
ch...@gmail.com <ch...@gmail.com> #100
Need this feature
al...@yoxel.com <al...@yoxel.com> #101
We need this feature too!
[Deleted User] <[Deleted User]> #102
bm...@smglobalgroup.com <bm...@smglobalgroup.com> #103
yg...@wayfair.com <yg...@wayfair.com> #104
ch...@googlemail.com <ch...@googlemail.com> #105
ra...@passenger-clothing.com <ra...@passenger-clothing.com> #106
jb...@gmail.com <jb...@gmail.com> #107
qu...@airbus.com <qu...@airbus.com> #108
ww...@smarttokenlabs.com <ww...@smarttokenlabs.com> #109
pk...@arista.com <pk...@arista.com> #110
fa...@gmail.com <fa...@gmail.com> #111
[Deleted User] <[Deleted User]> #112
[Deleted User] <[Deleted User]> #113
jp...@google.com <jp...@google.com>
ro...@canva.com <ro...@canva.com> #114
sa...@hotmail.com <sa...@hotmail.com> #115
my...@gmail.com <my...@gmail.com> #116
ya...@gmail.com <ya...@gmail.com> #117
je...@uptodatewebdesign.com <je...@uptodatewebdesign.com> #118
me...@corrdyn.com <me...@corrdyn.com> #119
jo...@hallo-nomina.de <jo...@hallo-nomina.de> #120
ru...@kaffa.no <ru...@kaffa.no> #121
a2...@gmail.com <a2...@gmail.com> #122
ki...@gmail.com <ki...@gmail.com> #123
ef...@wefitgroup.com <ef...@wefitgroup.com> #124
ch...@blueprinteducation.org <ch...@blueprinteducation.org> #125
This is a must! A bot/chat app being able to interact with a space is great, but tasks are such a core feature of a Google Space and need to be a part of that API somehow!!!
ma...@kissflow.com <ma...@kissflow.com> #126
su...@gmail.com <su...@gmail.com> #127
jo...@fenwig.co.uk <jo...@fenwig.co.uk> #128
ca...@matc.edu <ca...@matc.edu> #129
[Deleted User] <[Deleted User]> #130
vr...@salala.de <vr...@salala.de> #131
je...@uptodatewebdesign.com <je...@uptodatewebdesign.com> #132
pd...@mese.gr <pd...@mese.gr> #133
ph...@imlakeshorganics.com <ph...@imlakeshorganics.com> #134
I use Zapier and would like to see their integration expanded once a new API is released.
ee...@clubfair.org <ee...@clubfair.org> #135
li...@telling.eu <li...@telling.eu> #136
However, as long as the API is not complete, management will not have an overview across projects. When an entire management team is unable to combine tasks into one overview, it can have several significant business impacts. Here are some potential consequences:
1. Lack of Coordination and Collaboration:
Without a consolidated overview, different teams and departments may be working in silos, leading to a lack of coordination and collaboration.
Duplication of efforts can occur, as teams may not be aware of each other's tasks and activities.
2. Inefficiency:
The absence of a unified task overview can result in inefficiencies, with managers and employees spending more time trying to gather information and coordinate efforts.
Delays in decision-making and project timelines can occur, negatively impacting overall productivity.
3. Misaligned Priorities:
Without a clear overview, there is a risk of misaligned priorities among different teams or departments.
The organization may struggle to focus on critical tasks and objectives, leading to wasted resources and missed opportunities.
4. Poor Communication:
A lack of task integration can result in poor communication flow within the organization.
Important information may not be communicated effectively, leading to misunderstandings and potential errors in execution.
5. Increased Risk of Errors:
When tasks are not consolidated into one overview, there is a higher likelihood of errors and mistakes.
Critical details may be overlooked, and the lack of visibility into the overall picture can lead to suboptimal decision-making.
6. Difficulty in Monitoring Progress:
Monitoring the progress of various tasks becomes challenging without a centralized overview.
Managers may struggle to assess the status of projects, identify bottlenecks, and take corrective actions in a timely manner.
7. Impact on Employee Morale:
Employees may feel frustrated and demotivated when they perceive a lack of organization and coordination.
A disjointed approach to task management can contribute to a negative work environment and impact employee morale.
8. Customer Satisfaction:
If tasks related to customer service, product development, or other customer-facing activities are not well-coordinated, it can result in poor customer experiences.
This, in turn, can affect customer satisfaction and loyalty.
9. Competitive Disadvantage:
In today's fast-paced business environment, organizations need to be agile and responsive. A lack of task integration can make it difficult to adapt quickly to changing market conditions. Competitors with more streamlined operations may gain a competitive advantage.
To address these issues, Google should invest in fixing this API, to ensure that tasks are effectively combined into a cohesive overview for the entire management team, aka people who work across projects.
de...@everlastbrands.com <de...@everlastbrands.com> #137
be...@plugable.com <be...@plugable.com> #138
da...@quarksoft.com <da...@quarksoft.com> #139
le...@maasi.eu <le...@maasi.eu> #140
This integration is NEEDED. As are many others (such as reminders being visible in Gmail's calendar add-on, or document approvals being made available via APIs)... c'mon, Workspace could be THE answer, you're losing grip!
I give you 2 more months, or I'll start migrating my companies to Microsoft. Enough is enough.
da...@innoarea.com <da...@innoarea.com> #141
js...@gmail.com <js...@gmail.com> #142
ra...@azimutbenetti.com <ra...@azimutbenetti.com> #143
Pa...@sensenumbers.com <Pa...@sensenumbers.com> #144
Tasks in spaces are useless.
Tasks in docs are halfway useless because there is no overview, no sharing...
:-(
rk...@watech.cz <rk...@watech.cz> #145
ph...@villbrygg.com <ph...@villbrygg.com> #146
na...@hanabitech.com <na...@hanabitech.com> #147
ul...@larian.com <ul...@larian.com> #148
sh...@mylocalchemist.co.uk <sh...@mylocalchemist.co.uk> #149
ta...@appgenix-software.com <ta...@appgenix-software.com> #150
mu...@codeforpakistan.org <mu...@codeforpakistan.org> #151
hu...@gmail.com <hu...@gmail.com> #152
jo...@hallo-nomina.de <jo...@hallo-nomina.de> #153
an...@sofist.co <an...@sofist.co> #154
ho...@gmail.com <ho...@gmail.com> #155
al...@getabearhug.com <al...@getabearhug.com> #156
su...@scanifly.com <su...@scanifly.com> #157
ja...@localmetric.es <ja...@localmetric.es> #158
se...@inovasense.pt <se...@inovasense.pt> #159
mi...@gmail.com <mi...@gmail.com> #160
di...@gmail.com <di...@gmail.com> #161
+1
ja...@wccusd.net <ja...@wccusd.net> #162
ma...@mgsq.it <ma...@mgsq.it> #163
Da...@neo.com.au <Da...@neo.com.au> #164
fa...@uber.com <fa...@uber.com> #165
ma...@intentio.co.za <ma...@intentio.co.za> #166
fe...@elate.xyz <fe...@elate.xyz> #167
jp...@google.com <jp...@google.com> #168
I am marking this issue as FIXED, as the internally reported issue has been marked as FIXED. Please note that there may be a delay in rolling this out to production. Thank you for your patience, and please comment here if the issue remains.
ma...@intentio.co.za <ma...@intentio.co.za> #169
jp...@google.com <jp...@google.com> #170
Should already be available:
di...@gmail.com <di...@gmail.com> #171
Looking at the Tasks API module on GCP, it would seem it hasn't been updated since last year (please see the attached screenshot).
The assignmentInfo object is also not pulling through, which would lead me to believe that it is still being rolled out in phases or not yet available. Could you please advise on this?
Thanks in advance.
st...@steegle.com <st...@steegle.com> #172
In reply to #171: the only way I could get the API Explorer to show me tasks from a space was to enable the showAssigned and showHidden options on the list action. I hope this helps.
di...@gmail.com <di...@gmail.com> #173
ja...@nasserlaw.com <ja...@nasserlaw.com> #174
"Output only. Information about the Chat Space where this task originates from. This field is read-only."
jp...@google.com <jp...@google.com> #175
See the feature request in
pa...@reclaim.ai <pa...@reclaim.ai> #176
This issue does not seem to be resolved for me or any of our customers.
h-...@biglobe.co.jp <h-...@biglobe.co.jp> #177
I was able to retrieve tasks assigned in Google Chat spaces using Google Apps Script!
The key was enabling the showAssigned option when calling the Google Tasks API in the Advanced Service.
Running const tasks = Tasks.Tasks.list(taskListId, { showAssigned: true }); got it working.
[Deleted User] <[Deleted User]> #178
The key was enabling the showAssigned option when calling the Google Tasks API in the Advanced Service. By running const tasks = Tasks.Tasks.list(taskListId, { showAssigned: true });
Description
If you create and assign a task from a Google Chat room, it will appear on the default list in the official Google Tasks client but can't be fetched with the API.
Many thanks