Obsolete
Status Update
Comments
de...@gmail.com <de...@gmail.com> #2
As a corollary, for properties where unique=True one
should be able to fetch an entity by doing
Model.get_by_uniquepropertyname and also
Model.get_or_insert_by_uniquepropertyname... like Rails does with
ActiveRecord (e.g., Book.find_or_create_by_isbn).
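For illustration only, the Rails-style finders asked for above could look like the following sketch. This is not the App Engine API: the store and the `get_by_email` / `get_or_insert_by_email` names are hypothetical, and a plain dict stands in for a datastore index on the unique property.

```python
class UniqueStore:
    """Toy in-memory stand-in for a datastore with a unique index on 'email'."""

    def __init__(self):
        self._by_email = {}

    def get_by_email(self, email):
        # Fetch the entity whose unique 'email' property matches, or None.
        return self._by_email.get(email)

    def get_or_insert_by_email(self, email, **props):
        # Return the existing entity for this email, or create one atomically.
        if email not in self._by_email:
            self._by_email[email] = dict(email=email, **props)
        return self._by_email[email]

store = UniqueStore()
u1 = store.get_or_insert_by_email("a@example.com", name="Ann")
u2 = store.get_or_insert_by_email("a@example.com", name="Bob")
assert u1 is u2  # the second call returns the existing entity, it does not overwrite
```

The point of the sketch is only the shape of the API: one call that reads by the unique value, and one that reads-or-creates in a single step.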
wi...@gmail.com <wi...@gmail.com> #3
I'd rather see this added as an index feature, so the uniqueness could be spread
across multiple fields (and an index would almost certainly be needed to support this
efficiently, anyway).
jo...@gmail.com <jo...@gmail.com> #4
I totally agree that this should be implemented as an index. It could (and I think
should) still be specified in the schema/property constructor and the index
automatically generated.
de...@gmail.com <de...@gmail.com> #5
It would also be great to have a way, in the schema declarations, to specify
composite unique indexes/properties, especially for reference properties.
ra...@gmail.com <ra...@gmail.com> #6
Yes, this would be super useful.
ma...@hotmail.com <ma...@hotmail.com> #7
Yes, this is a very common usecase +1
xi...@gmail.com <xi...@gmail.com> #8
I need this feature.
ac...@gmail.com <ac...@gmail.com> #9
Can't this be done with a carefully constructed key name?
no...@gmail.com <no...@gmail.com> #10
No. It can't be done with a carefully constructed key name... because the key name is
immutable once it is set at creation. For example, say you have a User model with an
email address in it. You want the email address to be unique across all User
entities. If you use the email address as key name you are out of luck when the user
changes his/her email.
ra...@gmail.com <ra...@gmail.com> #11
no i don't think so, because keys are used to reference objects: if you change the
key you have to update all referencing properties.
uniqueness is not the same use case as a primary key
ma...@gmail.com <ma...@gmail.com> #12
hi all! we've discussed this a bit internally, and we agree, it would be a useful
feature. it's definitely one of the most widely used constraints in SQL schemas.
unfortunately, it would be pretty difficult to implement with the current datastore
architecture, due in large part to our index consistency model, described in
http://code.google.com/appengine/articles/transaction_isolation.html . we might have
to chalk this up as another place the datastore doesn't provide traditional
relational database features.
at first, the key_name approach seemed tempting to me too. unfortunately, as edoardo
and bernddorn mention, it effectively makes those properties read only. having said
that, i suspect that "unique" properties like these actually are often read only, so
that might not be quite as big a drawback.
out of curiosity, would this feature be useful to anyone if it was limited to entity
groups? ie, you could specify that property values must be unique within each
individual entity group, but they could be repeated across entity groups?
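The entity-group-scoped variant proposed above could be modeled like this. The sketch is a toy: a per-group lock stands in for a datastore transaction on the entity group, and the uniqueness scan only looks inside that group, which is exactly why the same value can appear in two different groups. `GroupStore` and `insert_unique` are hypothetical names, not App Engine API.

```python
import threading

class GroupStore:
    """Toy model of per-entity-group uniqueness."""

    def __init__(self):
        self._groups = {}            # group_id -> list of entity dicts
        self._locks = {}             # group_id -> lock ("transaction" per group)
        self._registry = threading.Lock()

    def _lock_for(self, group):
        with self._registry:
            return self._locks.setdefault(group, threading.Lock())

    def insert_unique(self, group, prop, value, entity):
        # Serialize writers within one group only; other groups are untouched.
        with self._lock_for(group):
            for existing in self._groups.get(group, []):
                if existing.get(prop) == value:
                    raise ValueError("duplicate %s=%r in group %s" % (prop, value, group))
            self._groups.setdefault(group, []).append(entity)

store = GroupStore()
store.insert_unique("group-a", "email", "x@example.com", {"email": "x@example.com"})
# Same value in a *different* group is allowed under this compromise:
store.insert_unique("group-b", "email", "x@example.com", {"email": "x@example.com"})
```

A second insert of the same value into `group-a` would raise, while `group-b` accepted it: that is the "unique within each individual entity group" semantics the comment asks about.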
an...@gmail.com <an...@gmail.com> #13
ryanb: even properties that are thought to be immutable/read-only (unique
identifiers like ISSN/ISBN, say) may actually change in the future (and they have).
That is why, in general, it is a very bad idea to use properties as key_names (that's
the whole rationale behind using a simple integer as the primary key in relational
database design). Moreover, there are properties (say, email addresses) that you want
to be unique and that are also likely to change.
The correct way of implementing this would be to have the unique constraint apply
within all entities of a specific KIND, not just an entity group as you suggested.
Another way of going at it would be to be able to query the datastore within a
transaction.
fj...@gmail.com <fj...@gmail.com> #14
for me this would be totally ok. afaik an entity group is needed anyway to check
uniqueness in a transaction-safe manner, isn't it?
even though in my applications i do not check uniqueness transaction-safe, i thought
that if this happened on the datastore end it would be possible to implement it at
the storage/index level, which would make it much less error-prone than doing it at
the application level via properties.
fo...@gmail.com <fo...@gmail.com> #15
A transaction isn't needed to guarantee a property is unique. Do a put followed by
a query on that property; if the count is greater than 1, delete self. If you worry about
exceptions, you can mark a property of the new object to indicate it's new and flip
it once it is verified unique. If there's contention on the put, both objects should delete
themselves; there shouldn't be two objects both marked old but with the same property value.
Of course, solving this at the index level would be much more efficient.
xd...@gmail.com <xd...@gmail.com> #16
question: Are entity ID's guaranteed to be unique?
If so, would it be possible to create a property that does something like:
entity.index = eventual_id
instead of:
entity.put()
entity.index = entity.key().id()
ma...@gmail.com <ma...@gmail.com> #17
[Comment deleted]
je...@gmail.com <je...@gmail.com> #18
generally, no, IDs are not unique. however, they will be unique for entities with the
same kind and the same parent, or root entities of the same kind.
in general, the full path is the only thing that's always guaranteed to be unique.
see http://code.google.com/appengine/docs/datastore/keysandentitygroups.html#Paths_and_Key_Uniqueness .
sa...@gmail.com <sa...@gmail.com> #19
I think this "compromise" is pretty much useless. The whole point of having unique
entities is to have some slightly more complex guarantees about consistency than what
you currently provide with key_name. Is this feature really impossible to implement
in a scalable way or is this a specific problem of bigtable or the datastore?
What about allowing to safely change the key_name? Could you somehow allow for
locking two entity groups at the same time? If a deadlock occurs you could just
revert one of the transactions and let the other continue to run. This might need
some more thought though. :)
[Deleted User] <[Deleted User]> #20
I ran into this and between a thread in the groups and this ticket, it's been
helpful. I did come up with a solution for my specific use case. I'm using a floating
point number as a score and need it to be unique because I am paging off of the
value. I also needed it to not be immutable, because scores can change. What I did
was override the put method, with a check to confirm the value being unique and if
not, then adjusted it and tried again.
def put(object):
    valid = False
    while valid == False:
        cquery = db.GqlQuery('SELECT * FROM Story WHERE score = :1', object.score)
        results = cquery.fetch(1)
        if len(results) > 0:
            # bump the score and re-check (the original read 'value = value + .01',
            # which referenced an undefined name)
            object.score = object.score + .01
        else:
            valid = True
    super(Story, Story).put(object)
One thing that might be useful from the appengine standpoint would be to add an
attribute similar to validator for properties, like unique_validator = None, which
could then be a hook for developers to create their own functions. Or if you needed a
more generic function that could raise an exception, it could be something similar to
the above, except it would raise an exception on the check for an existing value, and
it could then be up to the developer to catch the exception and adjust the value
accordingly before reattempting the put operation. Then you could just have a unique
= True/False attribute.
en...@google.com <en...@google.com>
ke...@kerrickstaley.com <ke...@kerrickstaley.com> #21
In JPA (Java) there is an annotation @UniqueConstraint [1] that can be put on the
class-level table annotation to identify which properties should be unique (either
alone or alongside others). This is usually used in addition to property-level
annotations, which have that unique=True/False property.
Before calling put(), the information from the annotation (or a simple property
_unique_constraints) is used to issue some queries, and, if no constraints are
violated, the entity is stored. If any constraint is violated an appropriate error
can be raised.
I have implemented a very basic version of my suggestion, which is enough for what I
need:
## start
class DbEntity(db.Model):
    _unique_properties = None
    timestamp = db.DateTimeProperty(auto_now=True)

    def __init__(self, parent=None, key_name=None, _app=None,
                 unique_properties=None, **kwds):
        super(DbEntity, self).__init__(parent, key_name, _app, **kwds)
        if unique_properties:
            logging.debug("Unique properties: %s" % unique_properties)
            self._unique_properties = unique_properties

    def put(self):
        if self._unique_properties:
            logging.debug('checking unique properties for: %s ...' % self.__class__)
            for unique in self._unique_properties:
                gqlString = 'WHERE %s = :1' % unique
                param = getattr(self, unique)  # getattr is safer than the original eval()
                logging.info('GQL: self.gql("%s", %s)' % (gqlString, param))
                query = self.gql(gqlString, param)
                otherObjects = query.fetch(10)
                if len(otherObjects) > 0:
                    logging.error("Objects that violate the constraints: %s" % otherObjects)
                    raise db.BadPropertyError("Other instances of %s exist with %s = %s" %
                                              (self.__class__, unique, param))
        return super(DbEntity, self).put()

class Team(DbEntity):
    name = db.StringProperty(required=True)

    def __init__(self, parent=None, key_name=None, _app=None, **kwds):
        # set the unique property names
        super(Team, self).__init__(unique_properties=['name'], **kwds)
## end
[1]http://java.sun.com/javaee/5/docs/api/javax/persistence/UniqueConstraint.html
jo...@gmail.com <jo...@gmail.com> #22
re: Comment 23
This code contains a race condition and will not ensure uniqueness (it should, most
of the time, but there's no uniqueness guarantee).
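The race in the query-before-put check could be closed with a marker-entity pattern: keep one marker per unique value, keyed by the value itself, and create or check it atomically before writing the real entity. The sketch below is not App Engine code; a dict plus a lock stands in for datastore keys and transactions, and `UniqueIndex`, `claim`, and `release` are hypothetical names. On the datastore, the marker's key_name would encode the value, so the uniqueness check becomes a get-by-key inside a transaction rather than a racy query.

```python
import threading

class UniqueIndex:
    """Toy marker-entity index: one marker per (kind, prop, value)."""

    def __init__(self):
        self._markers = {}              # marker key -> owner entity key
        self._lock = threading.Lock()   # stands in for a transaction on the marker

    def claim(self, kind, prop, value, owner_key):
        marker_key = "%s|%s|%s" % (kind, prop, value)
        with self._lock:
            holder = self._markers.get(marker_key)
            if holder is not None and holder != owner_key:
                raise ValueError("value already taken: %r" % (value,))
            self._markers[marker_key] = owner_key

    def release(self, kind, prop, value):
        # Needed when the property changes: free the old value's marker.
        self._markers.pop("%s|%s|%s" % (kind, prop, value), None)

idx = UniqueIndex()
idx.claim("User", "email", "a@example.com", owner_key="User:1")
```

Note how this also answers the key_name objection raised earlier in the thread: changing a user's email becomes `claim(new value)` followed by `release(old value)`, while the User entity's own key never changes.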
bo...@gmail.com <bo...@gmail.com> #23
sorry for my bad English, but why do I get 2 inserts when I expect 1?
Description
An application that subscribes to preview frames
and then calls Camera.release() can hit the crash below.
- What happened.
Looks like a race condition.
Camera preview buffers are passed to the application as messages, so delivery is
asynchronous. If a preview buffer happens to be processed *after* the
camera has been released, and there is a preview callback assigned, then
an exception is thrown and the thread dies.
Stacktrace:
D/AndroidRuntime( 3404): Shutting down VM
W/dalvikvm( 3404): threadid=3: thread exiting with uncaught exception
(group=0x4001b178)
E/AndroidRuntime( 3404): Uncaught handler: thread main exiting due to
uncaught exception
E/AndroidRuntime( 3404): java.lang.RuntimeException: Method called after
release()
E/AndroidRuntime( 3404): at
android.hardware.Camera.setHasPreviewCallback(Native Method)
E/AndroidRuntime( 3404): at android.hardware.Camera.access$600(Camera.java:58)
E/AndroidRuntime( 3404): at
android.hardware.Camera$EventHandler.handleMessage(Camera.java:331)
E/AndroidRuntime( 3404): at
android.os.Handler.dispatchMessage(Handler.java:99)
E/AndroidRuntime( 3404): at android.os.Looper.loop(Looper.java:123)
E/AndroidRuntime( 3404): at
android.app.ActivityThread.main(ActivityThread.java:4363)
E/AndroidRuntime( 3404): at java.lang.reflect.Method.invokeNative(Native
Method)
E/AndroidRuntime( 3404): at java.lang.reflect.Method.invoke(Method.java:521)
E/AndroidRuntime( 3404): at
com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:860)
E/AndroidRuntime( 3404): at
com.android.internal.os.ZygoteInit.main(ZygoteInit.java:618)
E/AndroidRuntime( 3404): at dalvik.system.NativeStart.main(Native Method)
I/Process ( 71): Sending signal. PID: 3404 SIG: 3
I/dalvikvm( 3404): threadid=7: reacting to signal 3
- What you think the correct behavior should be.
The camera preview handler should discard preview frames it gets when the
camera has been released.
This keeps happening to my users on the Nexus One; I have not reproduced it on other
phones (though the issue may of course still be present there).