Assigned
Status Update
Comments
di...@gmail.com <di...@gmail.com> #2
As a corollary, for properties where unique=True one
should be able to fetch an entity by doing
Model.get_by_uniquepropertyname and also
Model.get_or_insert_by_uniquepropertyname... like Rails does with
ActiveRecord (e.g., Book.find_or_create_by_isbn).
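For illustration, a rough sketch of what such helpers might look like on top of the Python db API (the Book model, get_by_isbn, and get_or_insert_by_isbn names are hypothetical, and unlike key_name-based lookups this query approach is not transactional):

from google.appengine.ext import db

class Book(db.Model):
    isbn = db.StringProperty(required=True)

    @classmethod
    def get_by_isbn(cls, isbn):
        # Query-based lookup; relies on isbn being kept unique elsewhere.
        return cls.all().filter('isbn =', isbn).get()

    @classmethod
    def get_or_insert_by_isbn(cls, isbn, **kwds):
        # Not atomic: two concurrent callers can both see None and insert.
        book = cls.get_by_isbn(isbn)
        if book is None:
            book = cls(isbn=isbn, **kwds)
            book.put()
        return book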
di...@gmail.com <di...@gmail.com> #3
I'd rather see this added as an index feature, so the uniqueness could be spread
across multiple fields (and an index would almost certainly be needed to support this
efficiently, anyway).
la...@gmail.com <la...@gmail.com> #4
I totally agree that this should be implemented as an index. It could (and I think
should) still be specified in the schema/property constructor and the index
automatically generated.
la...@bluemetis.com <la...@bluemetis.com> #5
It would also be great to have a way, in the schema declarations, to specify
composite unique indexes/properties, especially for reference properties.
de...@gmail.com <de...@gmail.com> #6
Yes, this would be super useful.
la...@bluemetis.com <la...@bluemetis.com> #7
Yes, this is a very common use case. +1
ro...@gmail.com <ro...@gmail.com> #8
I need this feature.
as...@google.com <as...@google.com> #9
Can't this be done with a carefully constructed key name?
jk...@google.com <jk...@google.com>
di...@gmail.com <di...@gmail.com> #10
No. It can't be done with a carefully constructed key name... because the key name is
immutable once it is set at creation. For example, say you have a User model with an
email address in it. You want the email address to be unique across all User
entities. If you use the email address as key name you are out of luck when the user
changes his/her email.
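To make the trade-off concrete, here is roughly what the key_name approach looks like (a sketch with hypothetical names; get_or_insert is transactional, but the key, and therefore the email, can never be changed in place):

from google.appengine.ext import db

class User(db.Model):
    email = db.StringProperty(required=True)

# Transactionally unique: only one entity can ever own this key_name.
user = User.get_or_insert('alice@example.com', email='alice@example.com')

# Keys are immutable, so "changing" the email means copying the entity
# to a new key and deleting the old one -- and any ReferenceProperty
# still pointing at the old key is now dangling.
new_user = User(key_name='alice@new.example.com',
                email='alice@new.example.com')
new_user.put()
user.delete()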
jo...@gmail.com <jo...@gmail.com> #11
no, i don't think so: because keys are used to reference objects, if you change the
key you have to update all referencing properties.
uniqueness is not the same use case as a primary key
[Deleted User] <[Deleted User]> #12
hi all! we've discussed this a bit internally, and we agree, it would be a useful
feature. it's definitely one of the most widely used constraints in SQL schemas.
unfortunately, it would be pretty difficult to implement with the current datastore
architecture, due in large part to our index consistency model, described in
http://code.google.com/appengine/articles/transaction_isolation.html . we might have
to chalk this up as another place the datastore doesn't provide traditional
relational database features.
at first, the key_name approach seemed tempting to me too. unfortunately, as edoardo
and bernddorn mention, it effectively makes those properties read only. having said
that, i suspect that "unique" properties like these actually are often read only, so
that might not be quite as big a drawback.
out of curiosity, would this feature be useful to anyone if it was limited to entity
groups? ie, you could specify that property values must be unique within each
individual entity group, but they could be repeated across entity groups?
ar...@gmail.com <ar...@gmail.com> #13
ryanb: even properties that are thought to be immutable/readonly (like unique
identifiers, say ISSN/ISBN etc) may actually change in the future (and they did).
That is why, in general, it is a very bad idea to use properties as key_names (that's
the whole thinking behind using a simple integer as primary key in relational database
design). Moreover, there are unique properties (say email addresses) that you want
unique and are most likely to change.
The correct way of implementing this would be to have the unique constraint apply
within all entities of a specific KIND, not just an entity group as you suggested.
Another way of going at it would be to be able to query the datastore within a
transaction.
la...@bluemetis.com <la...@bluemetis.com> #14
for me this would be totally ok; afaik an entity group is needed anyway to check
uniqueness in a transaction-safe manner, isn't it?
even though in my applications i do not check uniqueness in a transaction-safe way, i thought
that if this happened on the datastore end it would be possible to implement it at
the storage/index level, which would make it much less error prone than doing it at
the application level via properties.
el...@gmail.com <el...@gmail.com> #15
A transaction isn't needed to guarantee a property is unique. Do a put followed by
a query on that property; if the count is greater than 1, delete yourself. If you worry about
exceptions, you can mark one property of the new object to indicate it's new and flip
it once it is verified unique. If there's contention on the put, both objects should delete
themselves; there shouldn't be 2 objects both marked old but with the same property value.
Of course, solving this at the index level is much more efficient.
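Read literally, the suggestion amounts to something like this sketch (hypothetical Account model; as the next comments point out, there is still a window where the query does not yet see a concurrent put, so this is best-effort rather than a guarantee):

from google.appengine.ext import db

class Account(db.Model):
    email = db.StringProperty(required=True)

def put_if_unique(account):
    # Optimistically write, then check whether the value now appears twice.
    account.put()
    count = Account.all().filter('email =', account.email).count(2)
    if count > 1:
        # Lost the race: undo our own write.
        account.delete()
        raise ValueError('email %s is not unique' % account.email)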
ig...@chiletributa.cl <ig...@chiletributa.cl> #16
question: Are entity IDs guaranteed to be unique?
If so, would it be possible to create a property that does something like:
entity.index = eventual_id
instead of:
entity.put()
entity.index = entity.key().id()
[Deleted User] <[Deleted User]> #17
[Comment deleted]
[Deleted User] <[Deleted User]> #18
generally, no, IDs are not unique. however, they will be unique for entities with the
same kind and the same parent, or root entities of the same kind.
in general, the full path is the only thing that's always guaranteed to be unique.
see http://code.google.com/appengine/docs/datastore/keysandentitygroups.html#Paths_and_Key_Uniqueness .
mu...@ajegroup.com <mu...@ajegroup.com> #19
I think this "compromise" is pretty much useless. The whole point of having unique
entities is to have some slightly more complex guarantees about consistency than what
you currently provide with key_name. Is this feature really impossible to implement
in a scalable way, or is this a specific problem of bigtable or the datastore?
What about allowing the key_name to be changed safely? Could you somehow allow for
locking two entity groups at the same time? If a deadlock occurs you could just
revert one of the transactions and let the other continue to run. This might need
some more thought though. :)
ko...@gmail.com <ko...@gmail.com> #20
I ran into this and between a thread in the groups and this ticket, it's been
helpful. I did come up with a solution for my specific use case. I'm using a floating
point number as a score and need it to be unique because I am paging off of the
value. I also needed it to not be immutable, because scores can change. What I did
was override the put method, with a check to confirm the value being unique and if
not, then adjusted it and tried again.
def put(self):
    # Nudge the score by 0.01 until no other Story holds the same value.
    valid = False
    while not valid:
        cquery = db.GqlQuery('SELECT * FROM Story WHERE score = :1', self.score)
        results = cquery.fetch(1)
        if len(results) > 0:
            self.score += .01
        else:
            valid = True
    super(Story, self).put()
One thing that might be useful from the appengine standpoint would be to add an
attribute similar to validator for properties, like unique_validator = None, which
could then be a hook for developers to create their own functions. Or if you needed a
more generic function that could raise an exception, it could be something similar to
the above, except it would raise an exception on the check for an existing value, and
it could then be up to the developer to catch the exception and adjust the value
accordingly before reattempting the put operation. Then you could just have a unique
= True/False attribute.
el...@gmail.com <el...@gmail.com> #21
In JPA (Java) there is an annotation @UniqueConstraint [1] that can be put on the
class-level table annotation to identify which properties should be unique (either
alone or alongside others). This is usually used in addition to property-level
annotations, which have that unique=True/False property.
Before calling put(), the information from the annotation (or a simple property
_unique_constraints) is used to issue some queries, and, if no constraints are
violated, the entity is stored. If any constraint is violated an appropriate error
can be raised.
I have implemented a very basic version of my suggestion, which is enough for what I
need:
## start
import logging

from google.appengine.ext import db

class DbEntity(db.Model):
    _unique_properties = None
    timestamp = db.DateTimeProperty(auto_now=True)

    def __init__(self, parent=None, key_name=None, _app=None,
                 unique_properties=None, **kwds):
        super(DbEntity, self).__init__(parent, key_name, _app, **kwds)
        if unique_properties:
            logging.debug("Unique properties: %s" % unique_properties)
            self._unique_properties = unique_properties

    def put(self):
        if self._unique_properties:
            logging.debug('checking unique properties for: %s ...' % self.__class__)
            for unique in self._unique_properties:
                gql_string = 'WHERE %s = :1' % unique
                param = getattr(self, unique)  # safer than eval('self.%s' % unique)
                logging.info('GQL: self.gql("%s", %s)' % (gql_string, param))
                query = self.gql(gql_string, param)
                other_objects = query.fetch(10)
                if len(other_objects) > 0:
                    logging.error("Objects that violate the constraints: %s" % other_objects)
                    raise db.BadPropertyError(
                        "Other instances of %s exist with %s = %s" %
                        (self.__class__, unique, param))
        return super(DbEntity, self).put()

class Team(DbEntity):
    name = db.StringProperty(required=True)

    def __init__(self, parent=None, key_name=None, _app=None, **kwds):
        # set the unique property names, passing the other args through
        super(Team, self).__init__(parent, key_name, _app,
                                   unique_properties=['name'], **kwds)
## end
[1] http://java.sun.com/javaee/5/docs/api/javax/persistence/UniqueConstraint.html
th...@gmail.com <th...@gmail.com> #22
re: Comment 23
This code contains a race condition and will not ensure uniqueness (it should, most
of the time, but there's no uniqueness guarantee).
kz...@gmail.com <kz...@gmail.com> #23
sorry for my bad english, but why do I get 2 inserts when I expect 1?
th...@barsoom.net <th...@barsoom.net> #24
This is a total kludge, but what if you did memcache.add("dblock", "dblock", 10), then ran a query to see if there are any existing objects in the datastore with your unique fields before doing the db.put(), and then memcache.delete("dblock")? If the memcache.add() returns false, just wait a random number of milliseconds and try again.
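A rough sketch of that memcache-lock idea (the "dblock" key is from the comment; the entity/query arguments are hypothetical, and since memcache entries can be evicted at any time this narrows the race window rather than closing it):

import random
import time

from google.appengine.api import memcache

def put_if_unique(entity, duplicate_query):
    # add() only succeeds if the key is absent, so it acts as a crude
    # lock that expires after 10 seconds.
    while not memcache.add("dblock", "dblock", time=10):
        time.sleep(random.uniform(0.001, 0.05))
    try:
        if duplicate_query.get() is not None:
            raise ValueError('value already taken')
        entity.put()
    finally:
        memcache.delete("dblock")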
ek...@google.com <ek...@google.com>
jo...@gmail.com <jo...@gmail.com> #25
Yes, unique constraints are critical to certain use cases. My problem is that I allow the datastore to create unique, immutable numeric IDs for the Key, which are used for entity relationships, but I need mutable secondary key for human readable keys that may change. I ran into a case where a request was repeated quickly (within 50ms) and so two JVMs handled them simultaneously. Within a transaction, the code queried on the secondary key to see if it already existed. Both handlers got null, since the entity did not yet exist. They then created it and the datastore assigned two unique numeric primary keys to the new entities. This left me with two entities with duplicate secondary keys.
I understand there is a technique that involves creating an auxiliary entity to be used as a mutex for the secondary key. In JDO/JPA that is a bunch of ugly, obfuscating code that requires extra java classes, etc. JDO/JPA pretty much assume the semantics of a full SQL datastore, so this kind of kludge really shows that JDO/JPA is not a great solution for the Google datastore. The technique is much less offensive if you are using the low level api. But, all my code now uses JDO, so is it possible to implement the uniqueness annotation?
From my point of view, it would be OK to require an index and/or assume that writes to such entities would take longer. I understand that stage 1 (the commit phase) of a write does not update indices, so that (as it now stands) the entity is effectively committed (logged and visible to get()) prior to any indices being updated. Indices are updated in phase 2, which can occur after the commit call returns. However, uniqueness constraints are very important, so much so that it would be OK with me that my commit waited for phase 2 to finish before it returned. This would allow the indexing phase to enforce the uniqueness constraint. It would also be OK that the non-unique entity committed in phase 1 is visible to get() until phase 2 rolls it back.
At the very least, having a uniqueness constraint that only allows ONE of the entities into the index might be good enough. In other words, if I query by secondary key, I will get a single entity and it will always be the same one.
[Deleted User] <[Deleted User]> #26
An obvious case where you want uniqueness is with user logins. My app uses email addresses for the login. We set the key_name of our account model and use get_or_insert() to effect uniqueness.
The downfall is when the user wants to change their email address. We have to clone not just their account entity, but all the entities underneath it (of which there are potentially many thousands.) Of course, we want the user to be able to use their account during this time, so there's a lot of logic that pains me to have to keep around just to deal with the case of accounts in motion...
ts...@gmail.com <ts...@gmail.com> #27
is there a suitable workaround for Java to ensure uniqueness of a property in the data model?
di...@gmail.com <di...@gmail.com> #28
We've used this technique on a number of our applications: http://squeeville.com/2009/01/30/add-a-unique-constraint-to-google-app-engine/
It's simple but has some possibility of leaving unique values unable to be used, though it will guarantee that a unique value is actually unique. That is, because there is no transaction between the Unique model and the model holding the unique values, a failure might leave a unique value claimed but unusable. It fails on the side of guaranteeing uniqueness.
The example is Python, but is simple enough that there will be a direct Java analogy.
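The linked technique boils down to something like the following sketch (hypothetical names): a separate Unique kind whose key_name encodes the scope and value, claimed transactionally before the real entity is written.

from google.appengine.ext import db

class Unique(db.Model):
    """Marker entity; its key_name encodes a claimed unique value."""

def claim_unique(scope, value):
    # e.g. scope='User.email', value='alice@example.com'
    key_name = '%s:%s' % (scope, value)

    def txn():
        if Unique.get_by_key_name(key_name) is not None:
            raise ValueError('%s already claimed' % key_name)
        Unique(key_name=key_name).put()

    db.run_in_transaction(txn)

# claim_unique('User.email', 'alice@example.com')  # then put() the real entity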
ts...@gmail.com <ts...@gmail.com> #29
you can also combine the technique in #30 with cross-group transactions. this will protect you against losing unique values.
you create an entity group for each unique field, and then do a cross-group transaction checking/updating that entity group along with the entity group of your model.
this limits you to 4 unique fields per model, and is probably terrible for performance since you're effectively serializing writes across each field. but if you really, really need uniqueness...
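Sketched with ndb (the UniqueMarker and User kinds are hypothetical), the cross-group variant of the marker approach looks roughly like this; the marker claim and the real write commit or fail together:

from google.appengine.ext import ndb

class UniqueMarker(ndb.Model):
    """Root entity per claimed value; the key itself is the claim."""

class User(ndb.Model):
    email = ndb.StringProperty(required=True)

@ndb.transactional(xg=True)
def create_user(email):
    marker_key = ndb.Key(UniqueMarker, 'User.email:%s' % email)
    if marker_key.get() is not None:
        raise ValueError('email already taken')
    UniqueMarker(key=marker_key).put()
    user = User(email=email)
    user.put()  # second entity group in the same XG transaction
    return user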
ma...@europestar.com.br <ma...@europestar.com.br> #30
After investigating a bit about this problem, I got two possible working solutions that could be implemented at the application level:
@ndb.transactional
def put(self):
    from google.appengine.ext.db import STRONG_CONSISTENCY
    key = User.gql("WHERE ANCESTOR IS :1 AND email = :2",
                   self.key.parent(), self.url).get(
        keys_only=True, read_policy=STRONG_CONSISTENCY)
    assert key is None or key == self.key
    ndb.Model.put(self)

def put(self):
    from google.appengine.api import memcache
    memcache_key = "UserEmail" + repr(self.url)
    assert memcache.incr(memcache_key) is None
    try:
        from google.appengine.ext.db import STRONG_CONSISTENCY
        key = User.gql("WHERE ANCESTOR IS :1 AND email = :2",
                       self.key.parent(), self.url).get(
            keys_only=True, read_policy=STRONG_CONSISTENCY)
        assert key is None or key == self.key
        ndb.Model.put(self)
    finally:
        memcache.delete(memcache_key)
Of course I prefer the first one, and according to the documentation both should work.
Yet some people insist on using memcache to ensure there are no race conditions.
The only reasoning I see is because of the query, but even so, within the same entity group transactions are serializable according to the documentation, so it should work just fine. What am I missing?
[Deleted User] <[Deleted User]> #31
Just wanted to bump this issue a little bit. In Djangae (https://github.com/potatolondon/djangae/ ) we've implemented a kind of unique constraint support by creating marker instances for a unique combination. The key of the marker is constructed from the unique combination (enforcing uniqueness) and markers are acquired (in independent transactions) before a Put() and released after a Delete().
Markers also have a creation time, and an associated instance. If a marker for a unique combination already exists, then we check to see if its associated instance still exists (by doing an ancestor, keys_only query on its key) before raising an exception.
When creating new instances (where the Key id hasn't yet been generated) then we create a marker without an associated instance key, but we ignore such markers for unique constraint checking after a few seconds.
Obviously, this is a bit of a kludge, but it seems to work, and we've tried the approach on some pretty high traffic sites - it is costly though, both in performance and quota, but sometimes you really need to enforce uniqueness.
It would be much better if a similar approach could be implemented in the datastore itself. This is still a very sought after feature.
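For flavor, the marker entity described there might look roughly like this sketch (this is not Djangae's actual code; the names and grace period are illustrative):

import datetime

from google.appengine.ext import db

class UniqueMarker(db.Model):
    # The key_name encodes the kind plus the unique field combination.
    instance = db.StringProperty()  # str(key) of the owning entity, if known
    created = db.DateTimeProperty(auto_now_add=True)

def marker_blocks(marker, grace=datetime.timedelta(seconds=5)):
    # A marker without an associated instance is mid-creation; respect it
    # only for a few seconds, then treat it as stale.
    if marker.instance:
        return True
    return datetime.datetime.utcnow() - marker.created < grace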
ni...@gmail.com <ni...@gmail.com> #32
Bump.
I believe this is quite an important feature to be implemented.
I have run into similar constraints without it.
th...@ahamove.com <th...@ahamove.com> #33
Upvote + 1
[Deleted User] <[Deleted User]> #36
We need this!
cr...@gmail.com <cr...@gmail.com> #37
@Google, come on! This has been a known issue since 2008. Get this solved and ensure that Firebase/Firestore is competitive with other, similar database products.
di...@gmail.com <di...@gmail.com> #38
much awaited feature.
ni...@gmail.com <ni...@gmail.com> #39
Is there any good method available to get this working?
ek...@google.com <ek...@google.com> #40
wow. I commented on this issue over a decade ago. I don't think there's a good fix, but now you can use Google Cloud SQL instead, and set unique=true in your table schema...
ek...@google.com <ek...@google.com>
cl...@gmail.com <cl...@gmail.com> #41
Are we getting this? Ever?
sc...@swashlabs.com <sc...@swashlabs.com> #42
Why is this infeasible? PostgreSQL is enabled as a data source for DataStudio. Why not just fully implement jdbc?
[Deleted User] <[Deleted User]> #43
please, please support postgresql!!
di...@gmail.com <di...@gmail.com> #44
guys, i hate to say it, but since I posted this request in 2011 there has been zero progress.
never. gonna. happen.
jo...@itaueira.com <jo...@itaueira.com> #45
Google Cloud SQL PostgreSQL is in beta right now. It makes sense to support the PostgreSQL driver through JDBC.
Thanks,
vi...@gmail.com <vi...@gmail.com> #46
yes, we need support for PostgreSQL in Google script
ra...@gmail.com <ra...@gmail.com> #47
Please Google..
db...@dawnbreaks.net <db...@dawnbreaks.net> #48
On 2 August 2017 at 13:51, <buganizer-system@google.com> wrote:
> Replying to this email means your email address will be shared with the
> team that works on this product.
There's a team that works on this product?!
ek...@google.com <ek...@google.com> #49
Please see the history above; we decided that we will not implement this feature in Apps Script.
di...@gmail.com <di...@gmail.com> #50
hahaha yes but will you implement postgresql driver through jdbc???
this has been dead a long while folks
cr...@gmail.com <cr...@gmail.com> #51
Well, it's a shame. Here we started to use https://datastudio.google.com/ , and PostgreSQL support works fine. It's still in development, but I bet that will be very powerful in the future.
jo...@gmail.com <jo...@gmail.com> #52
You can connect Postgresql to Google sheets with the SuperMetrics addon, only free for 30 days though.
[Deleted User] <[Deleted User]> #53
So has this become feasible in the past two years? Would have thought the brilliant minds at Google would have figured this one out.
mi...@jivrus.com <mi...@jivrus.com> #54
I am not able to appreciate why Google decided not to implement PostgreSQL support in JDBC Apps Script when Google Data Studio supports it and Google Cloud Platform offers PostgreSQL as a database service.
Google, are you saying that the customers who chose PostgreSQL from Google Cloud Platform cannot have their Google Apps Script connect to it? This is a disgrace. Google, could you please review your decision in line with your support for PostgreSQL in your other product lines like Data Studio and GCP.
For me, it is a must for Apps Script to establish the connection with PostgreSQL. In fact, since it is JDBC, it should support connecting to any database (DB2, etc.) given we load the right JDBC driver.
Google, expecting more from you, thanks
[Deleted User] <[Deleted User]> #55
No postgres support still? Despite postgres being an offering of Google's cloud platform? This is insane!
db...@dawnbreaks.net <db...@dawnbreaks.net> #56
On 7 Mar. 2018 18:35, <buganizer-system@google.com> wrote:
> jo...@conxjobs.com <jo...@conxjobs.com> added comment #55:
> No postgres support still? Despite postgres being an offering of Google's
> cloud platform? This is insane!
I see you've encountered Google's attitude to their fodder.
The sooner you realise that they are not an enterprise-grade offering, the sooner you
can either knuckle down and dance entirely to their tune, or go find a
solution that actually suits your needs.
Sorry to break it to you,
David.
[Deleted User] <[Deleted User]> #57
Google, please add support for Postgres through JDBC. Why are you complicating a very simple process?
jo...@tryadhawk.com <jo...@tryadhawk.com> #58
Google please add support for postgres through JDBC. I just want to automate my reports instead of spending hours every week pulling and exporting data :(
[Deleted User] <[Deleted User]> #59
How can an API layer be added to Google SQL? postgreSQL is a supported, GA platform for Google SQL.
I am not surprised that this was infeasible in 2011; however, now that postgreSQL is supported within Google SQL, I hope that some changes can be made.
mi...@gmail.com <mi...@gmail.com> #60
Is someone from Google listening?
ek...@google.com <ek...@google.com> #61
I'm still listening :-) I raised this request with the Apps Script team again in December, but it was still out of scope. If we could enable connecting only to Google Cloud SQL PostgreSQL databases, not arbitrary ones, would that be enough for most folks?
[Deleted User] <[Deleted User]> #62
That would be at least consistent with what Google is offering, and would make me personally happy.
Thanks ek...@google.com!
mb...@gmail.com <mb...@gmail.com> #63
I would need to be able to access PgSQL on arbitrary addresses. It would be ok to limit it only to newest versions if that makes it easier somehow.
It would be strange to have full MySQL support but PostgreSQL limited to Google Cloud only.
Note that Google Data Studio also supports PostgreSQL as a datasource.
ti...@tillerhq.com <ti...@tillerhq.com> #64
ek...@google.com,
Thanks for re-opening!
To specifically answer the question: yes -- if you could enable connecting only to Google Cloud SQL PostreSQL databases that would be enough for my team.
- Tim
p.s. On the hope (...no clue as to the reality...) that the following suggestion might be more feasible to implement than another JDBC-driver-like implementation, I do wonder if it might be worth considering a new Apps Script interface for PostgreSQL, closer to that provided by https://node-postgres.com/api/client . I'd personally prefer that for a number of reasons, although certainly JDBC is acceptable. Apologies in advance if this opens a can of worms.
mi...@gmail.com <mi...@gmail.com> #65
GCP has Postgres as GA and Data Studio has a connector to Postgres. So Google
already has a way of connecting to Postgres. Moreover, Apps Script uses JDBC
(I guess it is a facade over Java behind it) and there is no dearth of
drivers in Java to connect to Postgres.
I see the implementation (or non-implementation) is more strategic than
technical, for reasons known only to Google.
There are times I have moved away from Apps Script because the existing database
was in Postgres.
I believe Google will eventually implement Postgres support in Apps Script;
the sooner the better for all.
thanks
Michaes
jo...@greatworx.com <jo...@greatworx.com> #66
Please consider supporting this. PostgreSQL is used everywhere in my org, and I'd love to use it from Google Apps Script. And yes, initially, it'd be fine to just support Google Cloud SQL PostgreSQL... at least in my case. However, would love to see full support in the future.
Thanks!
lu...@gorilainvest.com.br <lu...@gorilainvest.com.br> #67
In my case, my organisation's PostgreSQL is hosted in AWS so I would be very much interested in full support of any PostgreSQL DB.
[Deleted User] <[Deleted User]> #68
it's been like 7 years, Google, c'mon. It shouldn't be that hard for you guys
di...@gmail.com <di...@gmail.com> #69
Google has to want to do this. Clearly they do not.
de...@gmail.com <de...@gmail.com> #70
wow, seems that this is not an option. how can they have MySQL before Postgres?
some internal business that did not go well xD
[Deleted User] <[Deleted User]> #71
Google needs to get with the times. PostgreSQL is now the standard for a lot of enterprise applications running on AWS. There is little incentive in using any of Google's products if we can't use Apps Script to connect to PostgreSQL
[Deleted User] <[Deleted User]> #72
any update on this?
ek...@google.com <ek...@google.com> #73
Sorry, no updates at this time.
ja...@gmail.com <ja...@gmail.com> #74
Eight years guys. Having postgres support at the very least within Cloud would be really helpful.
[Deleted User] <[Deleted User]> #75
wow, I have just made a business case to get a postgres instance on Google Cloud SQL, only to find out that I cannot make an application using Google Script to interact with this service? Is this even real? Am I missing something?
[Deleted User] <[Deleted User]> #76
Could anyone point me in the right direction to make a connection to Cloud Sql Postgres instance via Google Scripts? PLEASE I need HELP Quick!!!
mr...@gmail.com <mr...@gmail.com> #77
Perhaps sheets + postgresql? https://zapier.com/apps/google-sheets/integrations/postgresql
But yeah, it's ridiculous.. google's ecosystem isn't user friendly at all.
[Deleted User] <[Deleted User]> #78
@mr...@gmail.com
thanks, but that is not what I'm really looking for.
mi...@gmail.com <mi...@gmail.com> #79
Google have really let themselves and their users down on this issue!
[Deleted User] <[Deleted User]> #80
I would appreciate it if someone could please confirm whether it's really the case that there is no way to directly make a connection between a Google Cloud SQL Postgres instance and Google Scripts services?
I will have to re-design my project in a huge way if the answer to my question is YES (Unfortunately)
Many thanks in advance!
mr...@gmail.com <mr...@gmail.com> #81
Perhaps describe what you're looking to achieve in case there are alternatives. I've come across this problem before and looks like there's no direct way to communicate between postgresql & google scripts.
[Deleted User] <[Deleted User]> #82
I was tasked to re-write a Google Script (web app), which ironically happened because one of the Google services is being completely switched off in a few months' time. The current application uses a lot of spreadsheets as a back end, so my first thought was to move it to a proper DB. After proper research I got the impression that for this I really need Cloud SQL (a relational database), even though the business seems to have got stuck on BigQuery (BigTime - I mean for everything!). Taking into consideration the fact that I hit the wall (very disappointed with Google), I need to consider: BigQuery as a back end or a MySQL instance in Cloud SQL.
I am extremely disappointed in Google and don't really understand how a company of this size can afford to propose a service without basic integration with other services, especially compared to the MySQL instance (double standards?)
Thank you all for your input on this matter.
mi...@jivrus.com <mi...@jivrus.com> #83
Hi All,
Unfortunately, there is NO support for connecting to a PostgreSQL database from Google Apps Script (though Data Studio can connect to PostgreSQL). I would be extremely happy if someone proves this wrong. The reason being: many of our customers (and ourselves) wanted a way to directly pull data from databases into Google Sheets. We built the Database Browser add-on product (https://databasebrowser.jivrus.com/ ) with a mission to query/edit any database from Google Sheets. We succeeded to the maximum level, interfacing with MySQL, GCloud SQL, Oracle, MS SQL Server, BigQuery, Firestore etc, except PostgreSQL. We have been following this issue (raised in 2011). But no solution yet.
Clearly, if Data Studio can connect to PostgreSQL, then Apps Script should be able to connect, but it is NOT. Google just does not want this feature to be available to Apps Script, for unknown political reasons (there is nothing technical here AFAIK).
Google, we like your products; the stranded customers who have chosen G Suite and PostgreSQL will also like you EVEN MORE if you let Apps Script connect to PostgreSQL directly. Hope you would do that soon.
cheers
Michaes
https://www.jivrus.com
ts...@gmail.com <ts...@gmail.com> #84
This deserves a response from Google for no other reason than the Cloud SQL for PostgreSQL FAQ (https://cloud.google.com/sql/docs/postgres/faq ) EXPLICITLY states that accessing Cloud SQL PostgreSQL IS possible using JDBC.
Q: Can I access my Cloud SQL instance programmatically outside of App Engine?
A: Yes. You can access Cloud SQL instances programmatically from external applications using any supported language. You can also connect using JDBC, including writing Apps Script scripts to access your Cloud SQL databases.
To reiterate: ...including Apps Script...
Google? Can you PLEASE confirm or deny that this is possible? Is the FAQ correct or incorrect?
di...@gmail.com <di...@gmail.com> #85
jeez...they have done nothing about this since i first started this request
in 2011. considering their response for the past eight years, i think you
have your answer.
ek...@google.com <ek...@google.com> #86
@Tim York - I (Googler) have responded to this issue before, but always with the unfortunate news that there hasn't been progress. Good catch on that documentation though, it is completely wrong. It looks like it was copied from the corresponding MySQL docs (https://cloud.google.com/sql/docs/mysql/faq#externaldev ) and wasn't updated. I'll raise that with the team to get it corrected.
ts...@gmail.com <ts...@gmail.com> #87
Thanks anyway for the response. Unfortunately, catching the documentation error sent me down a 2-day path that is now completely worthless.
Can you at least say whether it's something that is being worked on or exists somewhere on a roadmap? No dates needed. It would just be nice to know you guys recognize it as a serious functionality gap that needs closure. You're giving Microsoft way too much time to catch up.
mr...@gmail.com <mr...@gmail.com> #88
Would https://github.com/PostgREST/postgrest work for you? Since google recommends an api layer for now
ts...@gmail.com <ts...@gmail.com> #89
I'll check it out. Thanks for the suggestion!
ek...@google.com <ek...@google.com> #90
@Tim York, adding PostgreSQL support is not being actively worked on nor is it on a roadmap.
d....@cleanchemi.com <d....@cleanchemi.com> #91
- When using Google Data Studio it is possible to connect to a postgres database, but not SQL Server.
- With Apps Script, it is possible to connect to a SQL Server database, but not postgres.
This is a little frustrating & makes it difficult to choose a database that will interact well with Google products.
mr...@gmail.com <mr...@gmail.com> #92
It's very frustrating dealing with some of Google's services; they definitely don't make them easy to use, from Google Home to Google APIs. Too many PhDs are not great with usability, it seems. They're also acquiring companies/projects (BigQuery) that go against the interest of pursuing widely accepted DBMSs such as PostgreSQL. They're allocating resources where they feel they can earn the most while ignoring foundational features.
It's like they've acquired Nest and are not really focusing energy on providing support for Ecobee (a superior product) for their Google Home... at least in a timely fashion. They need a more democratic approach to these things.
mi...@jivrus.com <mi...@jivrus.com> #93
This is a simple ask from PostgreSQL users for the last 9 years. NINE YEARS!!! come on Google!!!
vi...@petlove.com.br <vi...@petlove.com.br> #94
9 years and nothing... :(
[Deleted User] <[Deleted User]> #95
Sad, Still nothing!!
[Deleted User] <[Deleted User]> #96
Come on google, sort it out. Shouldn't be rocket science and has been in the works for what.. 9 years?
Heaps of people use postgres. We wanna connect it to awesome google sheets. Throw us a bone!
fe...@gmail.com <fe...@gmail.com> #97
+ 1
cg...@gmail.com <cg...@gmail.com> #98
I'm also super surprised there's no postgres integration. +1 for this feature.
[Deleted User] <[Deleted User]> #99
+1
ar...@gmail.com <ar...@gmail.com> #100
Google team please add support for Postgres to JDBC, it is extremely lame that action on this hasn't been taken even when so many developers require it. Please take this up in priority. Postgres has become a standard now.
ro...@kabiev.com <ro...@kabiev.com> #101
+1000
[Deleted User] <[Deleted User]> #102
yes, this should be supported and easy
[Deleted User] <[Deleted User]> #103
if there is a plan to support it, can a timeframe be publicized please ...
ge...@gmail.com <ge...@gmail.com> #104
+1
di...@gmail.com <di...@gmail.com> #105
st...@ordoxy.com <st...@ordoxy.com> #106
+1000
ek...@google.com <ek...@google.com>
ma...@booksy.com <ma...@booksy.com> #107
+10000000000
[Deleted User] <[Deleted User]> #108
+1
sh...@gmail.com <sh...@gmail.com> #109
+1
ga...@gmail.com <ga...@gmail.com> #110
+1
th...@inspira.com.br <th...@inspira.com.br> #111
plz!
ga...@incentfit.com <ga...@incentfit.com> #112
+1
[Deleted User] <[Deleted User]> #113
come on google.
at...@celebrationtravelgroup.com <at...@celebrationtravelgroup.com> #114
+1
ru...@kromco.co.za <ru...@kromco.co.za> #115
+1
e8...@gmail.com <e8...@gmail.com> #116
+1
co...@outlook.com <co...@outlook.com> #117
+1
ab...@gmail.com <ab...@gmail.com> #118
+1
fd...@gmail.com <fd...@gmail.com> #119
+1
[Deleted User] <[Deleted User]> #120
I'll throw my name in the hat for this one. +1
ku...@gmail.com <ku...@gmail.com> #121
+1
[Deleted User] <[Deleted User]> #122
+1
[Deleted User] <[Deleted User]> #123
+1 please!!
[Deleted User] <[Deleted User]> #124
+1 come on, guys.....
lo...@gmail.com <lo...@gmail.com> #125
+1
ac...@gmail.com <ac...@gmail.com> #126
+10000000
ey...@wediditsolutions.com <ey...@wediditsolutions.com> #127
+1
[Deleted User] <[Deleted User]> #128
+1
pe...@gmail.com <pe...@gmail.com> #129
+1
fe...@gmail.com <fe...@gmail.com> #130
+1
[Deleted User] <[Deleted User]> #131
+1
How many times do we need to ask for this quite simple thing (I guess it's no more than driver compilation on the Java wrapper)?
em...@xertica.com <em...@xertica.com> #132
When will this option be available?
I need this option.
+1
se...@lilyandfox.com <se...@lilyandfox.com> #133
+1. This is insane that this isn't possible.
Perhaps better to move to a more supportive infrastructure provider?
ni...@gmail.com <ni...@gmail.com> #134
+1
[Deleted User] <[Deleted User]> #135
+1
gl...@adaptavist.com <gl...@adaptavist.com> #136
+1
k....@gmail.com <k....@gmail.com> #137
+1
vf...@zalando.de <vf...@zalando.de> #138
+1
da...@kpitaine.com <da...@kpitaine.com> #139
+ 10000 !
ni...@arigasolutions.fr <ni...@arigasolutions.fr> #140
+1
dj...@gmail.com <dj...@gmail.com> #141
Still nothing here? It's a bit weird...
po...@gmail.com <po...@gmail.com> #142
+1
ed...@simplificaenergia.com.br <ed...@simplificaenergia.com.br> #143
+1
sa...@relfor.com <sa...@relfor.com> #144
+1
hd...@ucsc.edu <hd...@ucsc.edu> #145
Fixing this issue would facilitate a lot of good work! +1
be...@gram.gs <be...@gram.gs> #146
+1
ro...@gmail.com <ro...@gmail.com> #147
+1
di...@smiledu.com <di...@smiledu.com> #148
PLEASE GOOGLE!!! it's been 9 years ;(
jr...@gmail.com <jr...@gmail.com> #149
+0
[Deleted User] <[Deleted User]> #150
+1
je...@convoicar.fr <je...@convoicar.fr> #151
+1
ca...@xlz.com.br <ca...@xlz.com.br> #152
+1
mi...@jivrus.com <mi...@jivrus.com> #153
After waiting for a long time for our add-on Database Browser, we ended up developing a private interface through Google Cloud Platform to connect with PostgreSQL, the latest version of MySQL, MongoDB, DynamoDB etc in our Database Browser add-on to pull/push data to Google Sheets.
Could have been easier if Apps Script supported this natively.
[Deleted User] <[Deleted User]> #154
Please make it possible. What could be the reason not to support a PostgreSQL JDBC connection? It would be very useful, just the same way there is a MySQL JDBC connection...
na...@gmail.com <na...@gmail.com> #155
10 years from the first request, and still nothing at all :(
ed...@gmail.com <ed...@gmail.com> #156
+1
s....@gmail.com <s....@gmail.com> #157
+1
pi...@gmail.com <pi...@gmail.com> #158
+1
[Deleted User] <[Deleted User]> #159
Wow, I can't believe it's been 10 years for such a small feature and still no sign of update from Google. Shame.
se...@gmail.com <se...@gmail.com> #160
it's been 10 years, and still nothing.
+1 PLEASE
sh...@gmail.com <sh...@gmail.com> #161
Comment has been deleted.
mu...@ambar.tech <mu...@ambar.tech> #162
+1
s....@gmail.com <s....@gmail.com> #163
+1!!!
vo...@gfrc.cl <vo...@gfrc.cl> #164
+1 !!!!! up!!
da...@teachme2.com <da...@teachme2.com> #165
+1 Come on, this can't be that difficult
re...@yahoo.com <re...@yahoo.com> #166
+1
dr...@gmail.com <dr...@gmail.com> #167
This is horrible, an issue lasting for 10 years and ignored.
pt...@gmail.com <pt...@gmail.com> #168
+1
os...@gmail.com <os...@gmail.com> #169
+1
jm...@skillsunlimitedinc.org <jm...@skillsunlimitedinc.org> #170
this is ridiculous. you guys asleep or what?
ia...@gmail.com <ia...@gmail.com> #171
+1
jr...@gmail.com <jr...@gmail.com> #172
hello??
lu...@gmail.com <lu...@gmail.com> #173
+1
os...@gmail.com <os...@gmail.com> #174
+1
ga...@gmail.com <ga...@gmail.com> #175
+1
ol...@gmail.com <ol...@gmail.com> #176
+1
da...@veri.co <da...@veri.co> #177
Comment has been deleted.
de...@gmail.com <de...@gmail.com> #178
+1
se...@lilyandfox.com <se...@lilyandfox.com> #179
+1
ma...@gmail.com <ma...@gmail.com> #180
+1
aw...@gmail.com <aw...@gmail.com> #181
+1
se...@global-mobility-service.com <se...@global-mobility-service.com> #182
+1
[Deleted User] <[Deleted User]> #183
+1
di...@dialecticanet.com <di...@dialecticanet.com> #184
+1
ju...@gmail.com <ju...@gmail.com> #185
+1
gf...@gmail.com <gf...@gmail.com> #186
+1
[Deleted User] <[Deleted User]> #187
Please update
[Deleted User] <[Deleted User]> #188
+1
ma...@uaw4121.org <ma...@uaw4121.org> #189
When I first saw that this was a decade old issue, I figured by the time I scrolled down here, it'd be fixed. I can't believe this..
no...@jadynekena.com <no...@jadynekena.com> #190
no news 10 years later? omg
mu...@ambar.tech <mu...@ambar.tech> #191
+1
[Deleted User] <[Deleted User]> #192
+1 pls
pa...@ame.g4s.com <pa...@ame.g4s.com> #193
+1 pls
how come Priority is P2 and it's not a bug?
mi...@rata.id <mi...@rata.id> #194
+1
just tried 3 days ago and found this while looking for the answer. Can't believe it's been 11 years and still not going anywhere
sc...@gmail.com <sc...@gmail.com> #195
no way! hahah this started in 2011. That's wild.
li...@gmail.com <li...@gmail.com> #196
+1
is...@gmail.com <is...@gmail.com> #197
+1
ep...@goseamless.co.za <ep...@goseamless.co.za> #198
+1
se...@gmail.com <se...@gmail.com> #199
Please fix this
ma...@arcca.io <ma...@arcca.io> #200
+1000000000000000000000000000000000000000000000
tr...@gmail.com <tr...@gmail.com> #201
+1
du...@gmail.com <du...@gmail.com> #202
+1
al...@agileengine.com <al...@agileengine.com> #203
+1
le...@gmail.com <le...@gmail.com> #204
+1
s....@gmail.com <s....@gmail.com> #205
+1
me...@fabiobernasconi.com <me...@fabiobernasconi.com> #206
Comment has been deleted.
ka...@gmail.com <ka...@gmail.com> #207
+1
ar...@gmail.com <ar...@gmail.com> #208
+1
pa...@ucrop.it <pa...@ucrop.it> #209
+1
[Deleted User] <[Deleted User]> #210
+1
sa...@dexterity.ai <sa...@dexterity.ai> #211
This ticket is living proof that Google has abandoned this project. Use twitter to let them know they need to officially EOL the project.
xe...@gmail.com <xe...@gmail.com> #212
+1
ju...@gmail.com <ju...@gmail.com> #213
+1
ar...@anybuddyapp.com <ar...@anybuddyapp.com> #214
+1
fe...@winclap.com <fe...@winclap.com> #215
+1
ah...@gmail.com <ah...@gmail.com> #216
+1
ja...@gmail.com <ja...@gmail.com> #217
+1
te...@gmail.com <te...@gmail.com> #218
+1
[Deleted User] <[Deleted User]> #219
+1
be...@gmail.com <be...@gmail.com> #220
how tf has pg support still not been added to jdbc.....
help us google gemini, you're our only hope
mi...@valoriansolutions.com <mi...@valoriansolutions.com> #221
Every. Single. Time. I try to use Google for something useful, there is a dead end... in addition to the constant worry that they'll kill it.
te...@gmail.com <te...@gmail.com> #222
+1
lu...@gmail.com <lu...@gmail.com> #223
+1
yi...@gmail.com <yi...@gmail.com> #224
+1
da...@copilotinnovations.com <da...@copilotinnovations.com> #225
+1
Description
JDBC support for postgresql, please.
Has this happened, or is it going to happen?
When I try to connect using the following:
Jdbc.getConnection("jdbc:postgresql://ip:port/database", "user", "pass")
I get an error:
'Connection URL uses an unsupported JDBC protocol.'
Thanks