Comments
[Deleted User] <[Deleted User]> #2
Currently we have to jump through hoops to work around the lack of table decorators in standard SQL. It would be great to have this feature in standard SQL as well.
mo...@google.com <mo...@google.com> #3
Work is under way to support an equivalent of the point-in-time table decorator in standard SQL.
[Deleted User] <[Deleted User]> #4
Great to hear that! We use range based table decorators to copy into temporary tables before aggregating.
mo...@google.com <mo...@google.com> #5
Range based decorators are more problematic than point in time, and are not targeted for standard SQL at this time.
re...@gmail.com <re...@gmail.com> #7
So by standard SQL, do you mean the temporal extensions added to SQL a while back? Or some language construct specific to BigQuery?
Thanks!
mo...@google.com <mo...@google.com> #8
I specifically meant the "FOR SYSTEM TIME AS OF" clause.
[Deleted User] <[Deleted User]> #9
By "standard SQL", I meant https://cloud.google.com/bigquery/docs/reference/standard-sql/ -- I'm interested in having the feature of "range based table decorators" https://cloud.google.com/bigquery/table-decorators in the newer "standard SQL" grammar.
"FOR SYSTEM TIME AS OF" is not documented there. DB2 supports that clause; in that context I'm asking for "FOR SYSTEM TIME FROM ? TO ?"
Thanks.
"FOR SYSTEM TIME AS OF" is not documented there. DB2 supports that clause; in that context I'm asking for "FOR SYSTEM TIME FROM ? TO ?"
Thanks.
gq...@brightcove.com <gq...@brightcove.com> #10
You will not be able to EOL legacy SQL without porting this functionality to standard SQL. Without it, many near-real-time applications become prohibitively expensive, making BQ a non-starter for any production systems going forward.
mo...@google.com <mo...@google.com> #11
To clarify: The plan of record is to support functionality equivalent to legacy SQL's table decorators for point-in-time snapshots by using the ANSI SQL clause "FOR SYSTEM TIME AS OF". It is not yet implemented - hence it is not at https://cloud.google.com/bigquery/docs/reference/standard-sql/, and hence we keep this issue open to track it.
gq...@brightcove.com <gq...@brightcove.com> #12
To clarify what is a breaking change for us: we need to be able to pull records FOR SYSTEM TIME SINCE, not SYSTEM TIME AS OF (the complement).
Using FOR SYSTEM TIME SINCE results in an ever-increasing number of rows being touched as the table grows, making it impossible to run a query on the table economically. If we had FOR SYSTEM TIME SINCE, which is equivalent to the tail, we could harvest rows from the tail of the table in real time. This is what allows us to generate time series statistics for our application. Table decorators were a crucial feature that drove our decision to use BigQuery for our applications. Without it (and I mean the full functionality of decorators), we do not have a viable solution and will have to look elsewhere for supporting technology. This is a deal breaker for us.
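(For reference, a minimal sketch of the legacy SQL "tail" pattern described above, with a hypothetical table name. The relative range decorator takes a millisecond offset, and leaving the end of the range open selects only rows added since the start time, so only the tail of the table is scanned and billed:)
SELECT *
FROM [my_project:my_dataset.events@-600000-]
(Here @-600000- means "rows added in the last 10 minutes".)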
gq...@brightcove.com <gq...@brightcove.com> #13
Correction to above statement: Using FOR SYSTEM TIME AS OF results in an ever increasing number...
[Deleted User] <[Deleted User]> #14
I hate to add a +1, but it sounds like Bluecore has a very similar use case for BigQuery as Brightcove: We periodically run queries over the most recent X minutes/hours of data that were streamed into a table. This has been extremely effective in simplifying our application, and reducing our costs. If we had an equivalent, we probably would have moved to Standard SQL already.
di...@monzo.com <di...@monzo.com> #15
Same for us at Monzo Bank as well... +1 to the previous two comments.
[Deleted User] <[Deleted User]> #16
Snapshot and range decorators are a MUST-HAVE feature. The exact syntax is not that important, but the feature itself is a must!
[Deleted User] <[Deleted User]> #17
+1 for table decorators in Standard SQL. Both snapshot and range are needed!
[Deleted User] <[Deleted User]> #18
+1 for table decorator for std sql
[Deleted User] <[Deleted User]> #19
+10 for this feature as well
[Deleted User] <[Deleted User]> #20
+1
[Deleted User] <[Deleted User]> #21
+1
r....@gmail.com <r....@gmail.com> #22
+1
ku...@xiatech.co.uk <ku...@xiatech.co.uk> #23
+1 - any update on this?
[Deleted User] <[Deleted User]> #24
+1
mo...@google.com <mo...@google.com> #25
Support for "point in time/snapshot" decorators in Standard SQL is being tested in testing environments. Approximate ETA release into production within this quarter.
[Deleted User] <[Deleted User]> #26
+1 Great news, can't wait
[Deleted User] <[Deleted User]> #27
+1
bh...@motorola.com <bh...@motorola.com> #28
+1
bh...@motorola.com <bh...@motorola.com> #29
Please release this ASAP into prod
[Deleted User] <[Deleted User]> #30
+1
[Deleted User] <[Deleted User]> #31
+1
[Deleted User] <[Deleted User]> #32
+1
[Deleted User] <[Deleted User]> #33
+1
da...@geotab.com <da...@geotab.com> #34
+1
bl...@domainmigrate.com <bl...@domainmigrate.com> #35
+1
[Deleted User] <[Deleted User]> #36
+1
[Deleted User] <[Deleted User]> #37
+1
fe...@lindenlab.com <fe...@lindenlab.com> #38
+1 (same requirement as stated above for near realtime reporting)
[Deleted User] <[Deleted User]> #39
+1
[Deleted User] <[Deleted User]> #40
+1
bo...@qubit.com <bo...@qubit.com> #41
+1
[Deleted User] <[Deleted User]> #42
+1 on this. Any new update since the good news that support for snapshot decorators is in testing?
fe...@lindenlab.com <fe...@lindenlab.com> #43
+1 specifically on feature pull records FOR SYSTEM TIME SINCE using std sql for near real-time reporting.
mo...@google.com <mo...@google.com> #44
This feature request only deals with "FOR SYSTEM TIME AS OF" for point-in-time snapshots. The SQL standard defines "FOR SYSTEM TIME BETWEEN" as a construct that returns _all_ records that were valid during a given interval, not just the newly added ones, so it is not suitable for that purpose.
fe...@lindenlab.com <fe...@lindenlab.com> #45
Got it. Do you have another bug-tracker link for requesting the feature to pull records FOR SYSTEM TIME SINCE using standard SQL (similar to the way it can be done in legacy SQL)?
ak...@gmail.com <ak...@gmail.com> #46
+1 ... Any more recent update here?
bi...@insparx.com <bi...@insparx.com> #47
+1
[Deleted User] <[Deleted User]> #48
Hey bi@inspark.com, I saw you edited your comment. Do you have a relevant update to your situation, which I found concerning?
ax...@insparx.com <ax...@insparx.com> #49
It was inappropriate, since BigQuery only guarantees decorators to work for the past 7 days. Longer times would be convenient, but that is not part of this topic.
mo...@google.com <mo...@google.com> #50
For everybody following along: The feature has rolled into production, but is not yet officially supported (there is no documentation or official release announcement). You can still try it out using the following syntax:
SELECT ... FROM Table FOR SYSTEM TIME AS OF <timestamp_expression>
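(A concrete sketch with a hypothetical table name; note that the documentation now spells the clause FOR SYSTEM_TIME AS OF, with an underscore:)
-- Standard SQL: snapshot of the table as it was one hour ago.
SELECT *
FROM `my_project.my_dataset.events`
FOR SYSTEM_TIME AS OF TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 HOUR)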
bl...@tableau.com <bl...@tableau.com> #51
Awesome news! Thanks for letting us know.
di...@monzo.com <di...@monzo.com> #52
Is it possible to select the last hour worth of data with it?
mo...@google.com <mo...@google.com> #53
It is possible to get data "as of an hour ago", but not "data that was added in the last hour".
[Deleted User] <[Deleted User]> #54
Hmm, what is the difference between "as of an hour ago" and "data that was added in the last hour"?
I could imagine that "added" refers to the insert time, but which time is used for "as of an hour ago"?
mo...@google.com <mo...@google.com> #55
"as of hour ago" literally means - you get the data that you would've gotten if you queried that table an hour ago. I.e. it is a snapshot of the table at specified time. In Legacy SQL it was called "snapshot decorator" - https://cloud.google.com/bigquery/table-decorators#snapshot_decorators
[Deleted User] <[Deleted User]> #56
oh, I see. Thank you.
fe...@lindenlab.com <fe...@lindenlab.com> #57
Is there a different thread for requesting "since a minute ago"? That's a requirement for realtime reporting, and it was one of the reasons we chose BigQuery. Please advise.
[Deleted User] <[Deleted User]> #58
I would like to highlight once again that feature parity between the old and the new versions is paramount.
If something is dropped from the new version, existing customers are effectively locked into the old world. And that leads either to locking Google into the old world as well, or to losing the customer through a forced migration.
I have observed a relaxed attitude to maintaining feature parity across a number of products, which is quite concerning.
mo...@google.com <mo...@google.com> #59
Please note that there is no forced migration from Legacy SQL to Standard SQL, and Legacy SQL is not deprecated. So if features you need are still missing from Standard SQL (like range decorators), you are not forced to move, and can keep using Legacy SQL with full support and SLO.
gq...@brightcove.com <gq...@brightcove.com> #60
That's all well and good, but you've already doubled our cost when we have to do part of our processing in legacy SQL and part of it in standard SQL. We depend on the functionality of both, and abandoning backward compatibility is abandoning responsibility to the customers that made your product successful in the first place.
[Deleted User] <[Deleted User]> #61
Would like to share in the sentiment of #60 - <gq...@brightcove.com>.
And while there is no _explicit_ push to use standard SQL, my understanding is that new features are added to the new version only.
Also, I would like to note that corporate decision makers are extremely risk-averse: operational and functional stability is paramount for business operations. They do not care much for technical details, but they do care about sudden cost increases.
Again, this particular case is a single episode of a general trend, and that's concerning.
mo...@google.com <mo...@google.com> #62
Ack that. Backward compatibility of features is not abandoned; legacy SQL decorators are the trickiest feature to support correctly in standard SQL, so they are one of the last ones remaining. I am not quite sure what "doubled cost" refers to: we charge the same for legacy and standard SQL queries, and they can be mixed freely on the same data.
While we do have an internal issue tracking the equivalent of "range decorators", I don't see a public one. I don't mind if we keep using this issue to cover range as well once snapshot ships, or if someone opens a new public issue.
gq...@brightcove.com <gq...@brightcove.com> #63
While the cost per query has not changed, the number of queries needed to perform a task has doubled. Part 1 has to run in legacy SQL to use the decorators, and part 2 has to run in standard SQL to perform array transforms and structure construction. This could be accomplished in a single chained set of queries (or nested queries in legacy SQL), but the logic has to be split into two parts in order to use decorators in one and structs in the other. I hope this makes clear the impact this lack of functional compatibility is causing.
mo...@google.com <mo...@google.com> #64
Thanks for the explanations. Basically, we could have waited to release Standard SQL until it had full feature parity with Legacy SQL, but then you wouldn't have been able to perform your task at all, because Legacy SQL lacks robust array and struct transformations. That probably would've been a worse situation than the current one. And one of the major reasons to invest in Standard SQL was to allow such array and struct transformations (they are not possible to port into Legacy SQL, because Legacy SQL's data model doesn't even have the concept of arrays and structs).
I am also interested to learn about the cost increases mentioned in #61 - is it a similar case or something else?
[Deleted User] <[Deleted User]> #65
From my perspective waiting until Standard SQL is fully operational would be the preferred option.
The corporate environment, while caring for technical ingenuity, does NOT put it as the priority number one. Priority number one is stability and predictability.
The shape in which Standard SQL was initially made available led to a few issues on our side as well. One example: we started to implement a complex solution on Standard SQL, only to find out at the end of the process that there was a showstopper bug with an unknown ETA for the fix.
We had to re-implement using Legacy SQL, losing about 4 days. The fix took more than a month to arrive.
https://issuetracker.google.com/code/p/google-bigquery/issues/detail?id=865
mo...@google.com <mo...@google.com> #66
Staying with Legacy SQL is a perfectly valid approach, and let me reiterate that Legacy SQL is fully supported and not deprecated. Users who want to be more careful can stay with it. We intentionally made it easy to mix and match dialects in the API, because we thought it would help incremental transition for those who want to transition. Perhaps we should add a project option to be able to lock a project to a specific dialect (either Legacy or Standard).
[Deleted User] <[Deleted User]> #67
Adding an option to lock the SQL version is a great idea. Having it as an organisation policy in IAM would work well for us.
gq...@brightcove.com <gq...@brightcove.com> #68
fully supported and not deprecated... yet...
When you call a product legacy it's for a reason. Let's not pretend that the intent was to support legacy forever.
If you read comment #5:
"Range based decorators are more problematic than point in time, and are not targeted for standard SQL at this time."
It shows that the plan to support decorators in standard SQL, and to what extent, is still evolving. I suspect that the popularity of this feature with your customer base was something that surprised the Standard SQL development team.
mo...@google.com <mo...@google.com> #69
The dialect is called "Legacy" to highlight that it is previous and less standard compliant dialect as compared to "Standard" one. It is not deprecated. "Forever" is a long time from now to be able to predict which features might be deprecated in the future, but if any of them will be - they will be subject to Google Cloud Deprecation Policy described at https://cloud.google.com/terms/
Indeed range decorators is one of the least popular features in BigQuery, with usage both by query volume and active projects around 1%. However even though that usage is small, it is not insignificant, and for users who depend on it their no immediate replacement. At this time there is no solution for range decorators in Standard SQL, we will keep looking into how to make them possible. Your feedback in this thread is a motivation to accelerate that effort.
[Deleted User] <[Deleted User]> #70
I think the problem is that neither dialect is a subset or superset of the other; rather, they share some features and each has some native ones. For example, if I wanted to get a complex result with the discussed decorators, I couldn't do it, because the standard dialect's decorator support is incomplete, while the legacy dialect cannot output multiple repeated fields.
I also find that the standard dialect is incredibly poorly documented, and I seriously struggle with accomplishing what I easily do in the legacy dialect, even with the help of documentation, Google Search, StackOverflow, etc, so it could be that most of the features are there, but they're not accessible.
mo...@google.com <mo...@google.com> #71
Re: documentation - could you please file a bug against the documentation citing what you wanted to accomplish but couldn't find documentation for? Strictly by volume, we have more documentation written for Standard SQL than for Legacy SQL, but it is hard for us to see missing content, since we are so used to Standard SQL ourselves - so your concrete feedback will be very valuable.
fe...@lindenlab.com <fe...@lindenlab.com> #72
FWIW, the reason having the table decorator (specifically a decorator built as "since last timestamp") work in standard SQL is highly useful is that we need to use it in realtime processes with incoming logs that have complex repeated fields. Breaking up one SQL statement into two in order to do one part with legacy and the next with standard SQL is costly time-wise (it requires extra DB access to push the output of the first statement as input to the second). And as you know, performance is very important for real-time processing.
I am guessing the reason you have not seen it used much is that using BQ for realtime reporting is probably fairly recent. Mostly people use a time series DB for real-time reporting. But for us, with a lot of real-time processing to implement on our new database in BQ, this was one of the features that made us choose BQ over other DBs.
[Deleted User] <[Deleted User]> #73
Having the FOR SYSTEM TIME AS OF snapshot decorator in production now is good to have.
But as a lot of commenters here say: the range-based decorators are still missing. We will need them in the (near) future to be able to do cost-controlled 'near realtime monitoring' on our streaming event data. So we too need something like 'since last timestamp'.
mo...@google.com <mo...@google.com> #74
For everybody following along: We are in the process of designing Standard SQL support for the equivalent of Legacy SQL's table range decorators, and we would like additional feedback. Do you use range decorators only for data written through streaming, or do you also rely on data written through small frequent load jobs (or query results)? I.e., would it be acceptable if Standard SQL's solution only worked with streaming data, but worked without the anomalies that exist with Legacy SQL's range decorators?
[Deleted User] <[Deleted User]> #75
My use case is pulling hourly aggregations from a load which appends ~50GB every 10 min. This is a standard job per 10 min, not streaming. Therefore my vote would be to have this work over batch loaded data as well.
[Deleted User] <[Deleted User]> #76
Our use case for range decorators is almost exclusively for data written through streaming.
bl...@tableau.com <bl...@tableau.com> #77
My use case for range decorators is mainly for querying data written to a streaming table... but having the flexibility to use it for batch or streaming would definitely be preferred.
gq...@brightcove.com <gq...@brightcove.com> #78
We use both, depending on the volume of data. For small volumes we stream; for very large volumes (5 billion rows per table per day) we use files.
mo...@google.com <mo...@google.com> #79
Thanks for the replies - one more question, please. Currently BigQuery supports partitioning at date granularity (https://cloud.google.com/bigquery/docs/partitioned-tables ). If BigQuery supported partitioning by hour, would you be able to use that as a replacement for range decorators? I.e., query data in the last one (or two) hourly partitions - such queries would be billed only for the data scanned in those hourly partitions.
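(To make the idea concrete with today's daily partitioning, a hedged sketch with hypothetical table and column names; the _PARTITIONTIME filter limits the billed scan to recent partitions, and a separate insert-time column narrows to the last hour:)
SELECT *
FROM `my_project.my_dataset.events`
WHERE _PARTITIONTIME >= TIMESTAMP_TRUNC(CURRENT_TIMESTAMP(), DAY)
  AND insert_ts >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 HOUR)
(Note that partition pruning may require a sufficiently simple filter on _PARTITIONTIME, and the full day's partition is still billed.)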
[Deleted User] <[Deleted User]> #80
This pattern is our current workaround, with manually maintained hourly tables. It does get the job done, but it's not really a substitute. One needs to be careful to coordinate load and aggregation jobs to keep everything sane. And for aggregations that run more often than hourly, we load into two targets: one appends to the raw table, and a second creates a short-lived staging table that we can aggregate from. It might feel hacky, but it's reliable and loads are free.
Although we could have built this "10 minute" use case on streaming, everything starts as files in GCS, so a load job is quick and reliable. I would revert back to legacy SQL, but I built STRUCT columns into the target schema, so I'm stuck. BigQuery is a great product; I'm not unhappy with the state of affairs. C'est la vie.
ke...@gmail.com <ke...@gmail.com> #81
For use cases not requiring the lowest possible latency, microbatching is cheaper and easier to debug. Therefore, definitely a +1 for batch-loaded data support.
That said, hourly partitioning is also a +1. One additional benefit is that it would let you support different timezones in reporting without one timezone costing much more, in both cost and performance, than another (due to having to hit two daily partitions instead of one).
bl...@tableau.com <bl...@tableau.com> #82
Would hourly partitioning solve for table snapshots using table decorators though?
https://cloud.google.com/bigquery/table-decorators#snapshot_decorators
Those snapshots have been one of my use cases for recovering deleted tables after the 2-day recovery period expires. I'd love for them to exist in standard SQL as well, but I suppose that may not be a huge use case.
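(For reference, the documented legacy recovery technique uses a snapshot decorator, either in a query whose result is written to a destination table or via the bq cp command; a minimal sketch with a hypothetical table name and epoch-millisecond timestamp:)
SELECT *
FROM [my_project:my_dataset.events@1500000000000]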
mo...@google.com <mo...@google.com> #83
Snapshot decorators are already implemented in Standard SQL through the standard "FOR SYSTEM TIME AS OF" construct (https://cloud.google.com/bigquery/docs/reference/standard-sql/query-syntax#from-clause ) - see update #50 in this issue tracker.
[Deleted User] <[Deleted User]> #84
For us, the use case is streaming / near-realtime reporting that is not too expensive.
Having hourly time partitioning vs. day-based partitioning would already improve this!
gq...@brightcove.com <gq...@brightcove.com> #85
Our use case is that we track the timestamp of the event and the timestamp of insertion (through the timestamps of the file load job). Then, for a given 5-minute event-time window, we select the min and max insertion timestamps and use those as decorators to extract the time slice from the daily table. For this use case, hourly partitions are of marginal use, since they would be based on the insertion timestamps, and what we are interested in is the event timestamps. Even 5-minute partitions would not help in our case.
[Deleted User] <[Deleted User]> #86
We use BigQuery on data frequently loaded through load jobs (approx. 10 GB per hour into 4 different tables). Our queries are based on hourly data, so we would really appreciate having data at hourly granularity.
[Deleted User] <[Deleted User]> #87
For Bluecore: Hourly partitioning would cover about 95% of our use cases. The only one I can think of off the top of my head that it wouldn't cover is ad-hoc debugging, where we sometimes end up looking at the most recent ~10-20 minutes.
fe...@lindenlab.com <fe...@lindenlab.com> #88
Our use case is only for streaming data.
ti...@gmail.com <ti...@gmail.com> #89
Is this an engineering problem or an accounting problem? How about just charging less for the special case of WHERE DATE > X when records have been added sequentially by DATE?
hi...@gmail.com <hi...@gmail.com> #90
+1 Table decorators in standard SQL
po...@gmail.com <po...@gmail.com> #91
Shouldn't the documentation be updated to reflect the fact that snapshot decorators are now supported!? (https://cloud.google.com/bigquery/docs/reference/standard-sql/query-syntax#from-clause )
https://cloud.google.com/bigquery/table-decorators
"Table decorators are currently unsupported in standard SQL" is incorrect. Should be "*Range based* table decorators are currently unsupported in standard SQL"
"Table decorators are currently unsupported in standard SQL" is incorrect. Should be "*Range based* table decorators are currently unsupported in standard SQL"
mo...@google.com <mo...@google.com> #92
Technically, the documentation is correct: snapshot decorators are not supported in Standard SQL. There is an alternative syntax which achieves the same functionality (and it is indeed documented), but it isn't decorator syntax.
po...@gmail.com <po...@gmail.com> #93
I see.
Do you not think it's worth at least mentioning "FOR SYSTEM TIME AS OF" in the table decorators documentation so people can easily learn it can be achieved now in standard SQL?
na...@nytimes.com <na...@nytimes.com> #94
+1 A must have feature.
[Deleted User] <[Deleted User]> #95
+1 for a FOR SYSTEM TIME SINCE feature in Standard SQL. We're tied to Legacy SQL as we use Range Decorators on streaming data.
[Deleted User] <[Deleted User]> #96
My suggestion would be to approach it in the same way as the table suffixes, e.g. WHERE _SYSTEM_TIME = ... or WHERE _SYSTEM_TIME BETWEEN ... AND ...
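(To illustrate the proposal, a sketch of that hypothetical syntax; this is NOT valid BigQuery SQL, it is only the shape suggested above, with a hypothetical table name:)
SELECT *
FROM `my_project.my_dataset.events`
WHERE _SYSTEM_TIME BETWEEN TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 HOUR)
                       AND CURRENT_TIMESTAMP()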
ju...@google.com <ju...@google.com> #97
+1
[Deleted User] <[Deleted User]> #98
+1
fi...@gmail.com <fi...@gmail.com> #99
+1
ch...@gmail.com <ch...@gmail.com> #100
+1
ad...@gmail.com <ad...@gmail.com> #101
+1
ma...@ml6.eu <ma...@ml6.eu> #102
+1
sc...@wunderkind.co <sc...@wunderkind.co> #103
+1
za...@sadan.me <za...@sadan.me> #104
+1
za...@protected.media <za...@protected.media> #105
+1
sm...@gmail.com <sm...@gmail.com> #106
+1
mr...@gmail.com <mr...@gmail.com> #107
+1
dm...@gmail.com <dm...@gmail.com> #108
+1
[Deleted User] <[Deleted User]> #109
+1
ge...@gmail.com <ge...@gmail.com> #110
+1
[Deleted User] <[Deleted User]> #111
+1
ph...@gmail.com <ph...@gmail.com> #112
+1
no...@gmail.com <no...@gmail.com> #113
+1
po...@gmail.com <po...@gmail.com> #114
+1
da...@myersholum.com <da...@myersholum.com> #115
+1
er...@condenast.com <er...@condenast.com> #116
+1 Table decorators in standard SQL
st...@egym.com <st...@egym.com> #117
+1
ja...@gmail.com <ja...@gmail.com> #118
+1
ak...@google.com <ak...@google.com> #119
+1
yk...@google.com <yk...@google.com> #120
+1
yu...@fastretailing.com <yu...@fastretailing.com> #121
+1
ry...@fastretailing.com <ry...@fastretailing.com> #122
+1
dh...@postmates.com <dh...@postmates.com> #123
+1
[Deleted User] <[Deleted User]> #124
+1
[Deleted User] <[Deleted User]> #125
+1
[Deleted User] <[Deleted User]> #126
I was wondering if there is any update on this issue? Comments #74 and #79 suggest that it was being actively worked on.
[Deleted User] <[Deleted User]> #127
+1
an...@unity3d.com <an...@unity3d.com> #128
+1
pm...@atso.com <pm...@atso.com> #129
+1
[Deleted User] <[Deleted User]> #130
+1
th...@gmail.com <th...@gmail.com> #131
+1
ms...@bol.com <ms...@bol.com> #132
Snapshot decorators have been implemented in standard SQL for a while now, as described here: https://stackoverflow.com/a/54188924/6203099
However, Range Decorators are not implemented (yet?) in standard SQL.
fz...@gmail.com <fz...@gmail.com> #133
All along I thought this would also cover the "since" decorator, not just the "as of" decorator in standard SQL. Does Standard SQL offer a "since" decorator?
aw...@gmail.com <aw...@gmail.com> #134
+1 - any update on this?
al...@nine.com.au <al...@nine.com.au> #135
+1
th...@ozon.io <th...@ozon.io> #136
+1
[Deleted User] <[Deleted User]> #137
+1
l....@gmail.com <l....@gmail.com> #138
+1
gp...@bendingspoons.com <gp...@bendingspoons.com> #139
+1
[Deleted User] <[Deleted User]> #140
+1
il...@weel.com <il...@weel.com> #141
+1
ku...@google.com <ku...@google.com> #142
+1
wa...@soundcloud.com <wa...@soundcloud.com> #143
+1
ra...@gmail.com <ra...@gmail.com> #144
+1
ro...@gmail.com <ro...@gmail.com> #145
+1
74...@gmail.com <74...@gmail.com> #146
+1
fl...@gmail.com <fl...@gmail.com> #147
+1
[Deleted User] <[Deleted User]> #149
+1
[Deleted User] <[Deleted User]> #150
+1
tr...@veolia.com <tr...@veolia.com> #151
+1
[Deleted User] <[Deleted User]> #152
+1
pr...@google.com <pr...@google.com> #153
+1
on...@trendyol.com <on...@trendyol.com> #154
+1
mu...@trendyol.com <mu...@trendyol.com> #155
that would be great!
+1
[Deleted User] <[Deleted User]> #156
+1
fa...@trendyol.com <fa...@trendyol.com> #157
+1
[Deleted User] <[Deleted User]> #158
+1
[Deleted User] <[Deleted User]> #159
+1
dm...@snapchat.com <dm...@snapchat.com> #160
+1
st...@google.com <st...@google.com> #161
+1
ma...@motorola.com <ma...@motorola.com> #162
+1
bh...@motorola.com <bh...@motorola.com> #163
+1
ma...@motorola.com <ma...@motorola.com> #164
+1
[Deleted User] <[Deleted User]> #165
+1
el...@gmail.com <el...@gmail.com> #166
+1
pr...@codechilli.lk <pr...@codechilli.lk> #167
+1
vi...@trmlabs.com <vi...@trmlabs.com> #168
+1
kn...@exchacc.ericsson.com <kn...@exchacc.ericsson.com> #169
+1
ju...@king.com <ju...@king.com> #170
+1
[Deleted User] <[Deleted User]> #171
+1
li...@cityswift.com <li...@cityswift.com> #172
+1
ju...@smartproxy.com <ju...@smartproxy.com> #173
+1
ni...@nordeus.com <ni...@nordeus.com> #174
Please do this, it will save a lot of pain, thanks!
ag...@ourgapps.com <ag...@ourgapps.com> #175
Waiting for it
ma...@gmail.com <ma...@gmail.com> #176
Still waiting........................................................................................ and waiting..................... and waiting................
ni...@google.com <ni...@google.com> #177
Re-assigning this feature request to Candice, as this is in her wheelhouse.
bw...@google.com <bw...@google.com> #178
Actually, this is provided through change history:
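(For readers landing here now: BigQuery's change history exposes appended rows through the APPENDS table-valued function. A minimal sketch with a hypothetical table; see the change history documentation for the exact signature and the time-travel window limits:)
-- All rows appended within the time travel window (NULL bounds mean unbounded):
SELECT *
FROM APPENDS(TABLE `my_project.my_dataset.events`, NULL, NULL)
(Bounding timestamps can be supplied instead of NULL to read only rows appended since a given time, which is the "since last timestamp" pattern requested throughout this thread.)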