mongomotor package

mongomotor.connect(db=None, async_framework='asyncio', alias='default', **kwargs)

Connect to the database specified by the ‘db’ argument.

Connection settings may be provided here as well if the database is not running on the default port on localhost. If authentication is needed, provide username and password arguments as well.

Multiple databases are supported by using aliases. Provide a separate alias to connect to a different instance of mongod.

Parameters are the same as for mongoengine.connection.connect() plus one:

Parameters:async_framework – Which asynchronous framework should be used. It can be tornado or asyncio. Defaults to asyncio.
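
As a minimal sketch (the database name and host below are illustrative placeholders), connecting with the default asyncio framework and using a second alias looks like this:

from mongomotor import connect, disconnect

connect('mydb', host='localhost', port=27017)   # uses the default 'asyncio' framework
connect('reports', alias='reports')             # a second database under its own alias
disconnect()                                    # closes the connection registered as 'default'
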
mongomotor.disconnect(alias='default')

Disconnects from the database identified by alias.

class mongomotor.Document(*args, **kwargs)

Bases: mongoengine.document.Document

The base class used for defining the structure and properties of collections of documents stored in MongoDB. Inherit from this class, and add fields as class attributes to define a document’s structure. Individual documents may then be created by making instances of the Document subclass.

By default, the MongoDB collection used to store documents created using a Document subclass will be the name of the subclass converted to lowercase. A different collection may be specified by providing collection to the meta dictionary in the class definition.

A Document subclass may be itself subclassed, to create a specialised version of the document that will be stored in the same collection. To facilitate this behaviour a _cls field is added to documents (hidden through the MongoEngine interface). To disable this behaviour and remove the dependence on the presence of _cls set allow_inheritance to False in the meta dictionary.

A Document may use a Capped Collection by specifying max_documents and max_size in the meta dictionary. max_documents is the maximum number of documents that is allowed to be stored in the collection, and max_size is the maximum size of the collection in bytes. max_size is rounded up to the next multiple of 256 by MongoDB internally (and by MongoEngine beforehand), so use a multiple of 256 yourself to avoid confusion. If max_size is not specified and max_documents is, max_size defaults to 10485760 bytes (10MB).

Indexes may be created by specifying indexes in the meta dictionary. The value should be a list of field names or tuples of field names. Index direction may be specified by prefixing the field names with a + or - sign.

Automatic index creation can be enabled by specifying auto_create_index in the meta dictionary. If this is set to True then indexes will be created by MongoMotor.

By default, _cls will be added to the start of every index (that doesn’t contain a list) if allow_inheritance is True. This can be disabled by either setting cls to False on the specific index or by setting index_cls to False on the meta dictionary for the document.

By default, any extra attribute existing in stored data but not declared
in your model will raise a mongoengine.FieldDoesNotExist error. This can be disabled by setting strict to False in the meta dictionary.
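
A sketch of a document definition using the meta options described above (the field classes are imported from MongoEngine; the class and field names are illustrative):

from mongomotor import Document
from mongoengine import StringField, IntField

class Artist(Document):
    name = StringField(required=True)
    albums = IntField(default=0)

    meta = {
        'collection': 'artists',         # override the default lowercased class name
        'indexes': ['name', '-albums'],  # field-name indexes; '-' means descending
        'allow_inheritance': False,      # no _cls field will be stored
    }
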
STRICT = False
cascade_save(**kwargs)

Recursively save any references and generic references on the document.

clean()

Hook for doing document level data cleaning before validation is run.

Any ValidationError raised by this method will not be associated with a particular field; it will have a special-case association with the field defined by NON_FIELD_ERRORS.

classmethod compare_indexes()

Compares the indexes defined in MongoEngine with the ones existing in the database. Returns any missing/extra indexes.

create_index(keys, background=False, **kwargs)

Creates the given indexes if required.

Parameters:
  • keys – a single index key or a list of index keys (to construct a multi-field index); keys may be prefixed with a + or a - to determine the index ordering
  • background – Allows index creation in the background
delete(signal_kwargs=None, **write_concern)

Delete the Document from the database. This will only take effect if the document has been previously saved.

Parameters:
  • signal_kwargs – (optional) kwargs dictionary to be passed to the signal calls.
  • write_concern – Extra keyword arguments are passed down which will be used as options for the resultant getLastError command. For example, save(..., write_concern={w: 2, fsync: True}, ...) will wait until at least two servers have recorded the write and will force an fsync on the primary server.
classmethod drop_collection()

Drops the entire collection associated with this mongomotor.Document type from the database.

classmethod ensure_index(key_or_list, drop_dups=False, background=False, **kwargs)

Ensure that the given indexes are in place. Deprecated in favour of create_index.

Parameters:
  • key_or_list – a single index key or a list of index keys (to construct a multi-field index); keys may be prefixed with a + or a - to determine the index ordering
  • background – Allows index creation in the background
  • drop_dups – Was removed/ignored with MongoDB >2.7.5. The value will be removed if PyMongo3+ is used
classmethod ensure_indexes()

Checks the document meta data and ensures all the indexes exist.

Global defaults can be set in the meta - see guide/defining-documents

Note

You can disable automatic index creation by setting auto_create_index to False in the document’s meta data

from_json(json_data, created=False)

Converts json data to an unsaved document instance

get_text_score()

Get text score from text query

list_indexes()

Lists all of the indexes that should be created for given collection. It includes all the indexes from super- and sub-classes.

modify(query={}, **update)

Perform an atomic update of the document in the database and reload the document object using the updated version.

Returns True if the document has been updated or False if the document in the database doesn’t match the query.

Note

All unsaved changes that have been made to the document are rejected if the method returns True.

Parameters:
  • query – the update will be performed only if the document in the database matches the query
  • update – Django-style update keyword arguments
my_metaclass

alias of TopLevelDocumentMetaclass

pk

Get the primary key.

register_delete_rule(document_cls, field_name, rule)

This method registers the delete rules to apply when removing this object.

coroutine reload(*fields, **kwargs)

Reloads all attributes from the database.

Parameters:
  • fields – (optional) args list of fields to reload
  • max_depth – (optional) depth of dereferencing to follow
coroutine save(force_insert=False, validate=True, clean=True, write_concern=None, cascade=None, cascade_kwargs=None, _refs=None, save_condition=None, signal_kwargs=None, **kwargs)

Save the Document to the database. If the document already exists, it will be updated, otherwise it will be created.

Parameters:
  • force_insert – only try to create a new document, don’t allow updates of existing documents.
  • validate – validates the document; set to False to skip.
  • clean – call the document clean method, requires validate to be True.
  • write_concern – Extra keyword arguments are passed down to save() OR insert() which will be used as options for the resultant getLastError command. For example, save(..., write_concern={w: 2, fsync: True}, ...) will wait until at least two servers have recorded the write and will force an fsync on the primary server.
  • cascade – Sets the flag for cascading saves. You can set a default by setting “cascade” in the document __meta__
  • cascade_kwargs – (optional) kwargs dictionary to be passed through to cascading saves. Implies cascade=True.
  • _refs – A list of processed references used in cascading saves
  • save_condition – only perform save if matching record in db satisfies condition(s) (e.g. version number). Raises OperationError if the conditions are not satisfied
  • signal_kwargs – (optional) kwargs dictionary to be passed to the signal calls.

Handles dereferencing of DBRef objects to a maximum depth in order to cut down the number of queries to MongoDB.
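
Because save() and reload() are coroutines in MongoMotor, they must be awaited. A minimal sketch using the illustrative Artist document defined earlier (connection setup is assumed to have been done already):

async def update_artist():
    artist = Artist(name='Toots Thielemans')
    await artist.save()                # insert the document
    artist.albums = 42
    await artist.save()                # update it
    await artist.reload(max_depth=1)   # refresh attributes from the database
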

switch_collection(collection_name, keep_created=True)

Temporarily switch the collection for a document instance.

Only really useful for archiving off data and calling save():

user = User.objects.get(id=user_id)
user.switch_collection('old-users')
user.save()
Parameters:
  • collection_name (str) – The collection name to use for saving the document
  • keep_created (bool) – keep the self._created value after switching collection; otherwise it is reset to True

See also

Use switch_db if you need to read from another database

switch_db(db_alias, keep_created=True)

Temporarily switch the database for a document instance.

Only really useful for archiving off data and calling save():

user = User.objects.get(id=user_id)
user.switch_db('archive-db')
user.save()
Parameters:
  • db_alias (str) – The database alias to use for saving the document
  • keep_created (bool) – keep the self._created value after switching db; otherwise it is reset to True

See also

Use switch_collection if you need to read from another collection

to_dbref()

Returns an instance of DBRef useful in __raw__ queries.

to_json(*args, **kwargs)

Convert this document to JSON.

Parameters:use_db_field – Serialize field names as they appear in MongoDB (as opposed to attribute names on this document). Defaults to True.
to_mongo(*args, **kwargs)
update(**kwargs)

Performs an update on the Document. A convenience wrapper to update().

Raises OperationError if called on an object that has not yet been saved.

validate(clean=True)

Ensure that all fields’ values are valid and that required fields are present.

class mongomotor.DynamicDocument(*args, **kwargs)

Bases: mongomotor.document.Document, mongoengine.document.DynamicDocument

STRICT = False
cascade_save(**kwargs)

Recursively save any references and generic references on the document.

clean()

Hook for doing document level data cleaning before validation is run.

Any ValidationError raised by this method will not be associated with a particular field; it will have a special-case association with the field defined by NON_FIELD_ERRORS.

compare_indexes()

Compares the indexes defined in MongoEngine with the ones existing in the database. Returns any missing/extra indexes.

create_index(keys, background=False, **kwargs)

Creates the given indexes if required.

Parameters:
  • keys – a single index key or a list of index keys (to construct a multi-field index); keys may be prefixed with a + or a - to determine the index ordering
  • background – Allows index creation in the background
delete(signal_kwargs=None, **write_concern)

Delete the Document from the database. This will only take effect if the document has been previously saved.

Parameters:
  • signal_kwargs – (optional) kwargs dictionary to be passed to the signal calls.
  • write_concern – Extra keyword arguments are passed down which will be used as options for the resultant getLastError command. For example, save(..., write_concern={w: 2, fsync: True}, ...) will wait until at least two servers have recorded the write and will force an fsync on the primary server.
drop_collection()

Drops the entire collection associated with this mongomotor.Document type from the database.

ensure_index(key_or_list, drop_dups=False, background=False, **kwargs)

Ensure that the given indexes are in place. Deprecated in favour of create_index.

Parameters:
  • key_or_list – a single index key or a list of index keys (to construct a multi-field index); keys may be prefixed with a + or a - to determine the index ordering
  • background – Allows index creation in the background
  • drop_dups – Was removed/ignored with MongoDB >2.7.5. The value will be removed if PyMongo3+ is used
ensure_indexes()

Checks the document meta data and ensures all the indexes exist.

Global defaults can be set in the meta - see guide/defining-documents

Note

You can disable automatic index creation by setting auto_create_index to False in the document’s meta data

from_json(json_data, created=False)

Converts json data to an unsaved document instance

get_text_score()

Get text score from text query

list_indexes()

Lists all of the indexes that should be created for given collection. It includes all the indexes from super- and sub-classes.

modify(query={}, **update)

Perform an atomic update of the document in the database and reload the document object using the updated version.

Returns True if the document has been updated or False if the document in the database doesn’t match the query.

Note

All unsaved changes that have been made to the document are rejected if the method returns True.

Parameters:
  • query – the update will be performed only if the document in the database matches the query
  • update – Django-style update keyword arguments
my_metaclass

alias of TopLevelDocumentMetaclass

pk

Get the primary key.

register_delete_rule(document_cls, field_name, rule)

This method registers the delete rules to apply when removing this object.

coroutine reload(*fields, **kwargs)

Reloads all attributes from the database.

Parameters:
  • fields – (optional) args list of fields to reload
  • max_depth – (optional) depth of dereferencing to follow
coroutine save(force_insert=False, validate=True, clean=True, write_concern=None, cascade=None, cascade_kwargs=None, _refs=None, save_condition=None, signal_kwargs=None, **kwargs)

Save the Document to the database. If the document already exists, it will be updated, otherwise it will be created.

Parameters:
  • force_insert – only try to create a new document, don’t allow updates of existing documents.
  • validate – validates the document; set to False to skip.
  • clean – call the document clean method, requires validate to be True.
  • write_concern – Extra keyword arguments are passed down to save() OR insert() which will be used as options for the resultant getLastError command. For example, save(..., write_concern={w: 2, fsync: True}, ...) will wait until at least two servers have recorded the write and will force an fsync on the primary server.
  • cascade – Sets the flag for cascading saves. You can set a default by setting “cascade” in the document __meta__
  • cascade_kwargs – (optional) kwargs dictionary to be passed through to cascading saves. Implies cascade=True.
  • _refs – A list of processed references used in cascading saves
  • save_condition – only perform save if matching record in db satisfies condition(s) (e.g. version number). Raises OperationError if the conditions are not satisfied
  • signal_kwargs – (optional) kwargs dictionary to be passed to the signal calls.

Handles dereferencing of DBRef objects to a maximum depth in order to cut down the number of queries to MongoDB.

switch_collection(collection_name, keep_created=True)

Temporarily switch the collection for a document instance.

Only really useful for archiving off data and calling save():

user = User.objects.get(id=user_id)
user.switch_collection('old-users')
user.save()
Parameters:
  • collection_name (str) – The collection name to use for saving the document
  • keep_created (bool) – keep the self._created value after switching collection; otherwise it is reset to True

See also

Use switch_db if you need to read from another database

switch_db(db_alias, keep_created=True)

Temporarily switch the database for a document instance.

Only really useful for archiving off data and calling save():

user = User.objects.get(id=user_id)
user.switch_db('archive-db')
user.save()
Parameters:
  • db_alias (str) – The database alias to use for saving the document
  • keep_created (bool) – keep the self._created value after switching db; otherwise it is reset to True

See also

Use switch_collection if you need to read from another collection

to_dbref()

Returns an instance of DBRef useful in __raw__ queries.

to_json(*args, **kwargs)

Convert this document to JSON.

Parameters:use_db_field – Serialize field names as they appear in MongoDB (as opposed to attribute names on this document). Defaults to True.
to_mongo(*args, **kwargs)
update(**kwargs)

Performs an update on the Document. A convenience wrapper to update().

Raises OperationError if called on an object that has not yet been saved.

validate(clean=True)

Ensure that all fields’ values are valid and that required fields are present.

class mongomotor.EmbeddedDocument(*args, **kwargs)

Bases: mongoengine.base.document.BaseDocument

A Document that isn’t stored in its own collection. EmbeddedDocuments should be used as fields on Documents through the EmbeddedDocumentField field type.

A EmbeddedDocument subclass may be itself subclassed, to create a specialised version of the embedded document that will be stored in the same collection. To facilitate this behaviour a _cls field is added to documents (hidden through the MongoEngine interface). To enable this behaviour set allow_inheritance to True in the meta dictionary.
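
A sketch of an embedded document used through an EmbeddedDocumentField (the field classes come from MongoEngine; class and field names are illustrative):

from mongomotor import Document, EmbeddedDocument
from mongoengine import StringField, EmbeddedDocumentField

class Address(EmbeddedDocument):
    street = StringField()
    city = StringField()

class Customer(Document):
    name = StringField(required=True)
    address = EmbeddedDocumentField(Address)   # stored inside each Customer document
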

STRICT = False
clean()

Hook for doing document level data cleaning before validation is run.

Any ValidationError raised by this method will not be associated with a particular field; it will have a special-case association with the field defined by NON_FIELD_ERRORS.

from_json(json_data, created=False)

Converts json data to an unsaved document instance

get_text_score()

Get text score from text query

my_metaclass

alias of DocumentMetaclass

reload(*args, **kwargs)
save(*args, **kwargs)
to_json(*args, **kwargs)

Convert this document to JSON.

Parameters:use_db_field – Serialize field names as they appear in MongoDB (as opposed to attribute names on this document). Defaults to True.
to_mongo(*args, **kwargs)
validate(clean=True)

Ensure that all fields’ values are valid and that required fields are present.

class mongomotor.DynamicEmbeddedDocument(*args, **kwargs)

Bases: mongoengine.document.EmbeddedDocument

A Dynamic Embedded Document class allowing flexible, expandable and uncontrolled schemas. See DynamicDocument for more information about dynamic documents.

STRICT = False
clean()

Hook for doing document level data cleaning before validation is run.

Any ValidationError raised by this method will not be associated with a particular field; it will have a special-case association with the field defined by NON_FIELD_ERRORS.

from_json(json_data, created=False)

Converts json data to an unsaved document instance

get_text_score()

Get text score from text query

my_metaclass

alias of DocumentMetaclass

reload(*args, **kwargs)
save(*args, **kwargs)
to_json(*args, **kwargs)

Convert this document to JSON.

Parameters:use_db_field – Serialize field names as they appear in MongoDB (as opposed to attribute names on this document). Defaults to True.
to_mongo(*args, **kwargs)
validate(clean=True)

Ensure that all fields’ values are valid and that required fields are present.

class mongomotor.MapReduceDocument(document, collection, key, value)

Bases: builtins.object

A document returned from a map/reduce query.

Parameters:
  • collection – An instance of Collection
  • key – Document/result key, often an instance of ObjectId. If supplied as an ObjectId found in the given collection, the object can be accessed via the object property.
  • value – The result(s) for this key.

New in version 0.3.

object

Lazy-load the object referenced by self.key. self.key should be the primary_key.

Submodules

mongomotor.fields module

class mongomotor.fields.BaseAsyncReferenceField[source]

Bases: builtins.object

Base class to asynchronize reference fields.

class mongomotor.fields.ComplexBaseField(db_field=None, name=None, required=False, default=None, unique=False, unique_with=None, primary_key=False, validation=None, choices=None, null=False, sparse=False, **kwargs)[source]

Bases: mongoengine.base.fields.ComplexBaseField

class mongomotor.fields.DictField(basecls=None, field=None, *args, **kwargs)[source]

Bases: mongomotor.fields.ComplexBaseField, mongoengine.fields.DictField

class mongomotor.fields.FileField(db_alias='default', collection_name='fs', **kwargs)[source]

Bases: mongoengine.fields.FileField

proxy_class
class mongomotor.fields.GenericReferenceField(*args, **kwargs)[source]

Bases: mongomotor.fields.BaseAsyncReferenceField, mongoengine.fields.GenericReferenceField

mongomotor.fields.GridFSProxy[source]
class mongomotor.fields.ListField(field=None, **kwargs)[source]

Bases: mongomotor.fields.ComplexBaseField, mongoengine.fields.ListField

class mongomotor.fields.ReferenceField(document_type, dbref=False, reverse_delete_rule=0, **kwargs)[source]

Bases: mongomotor.fields.BaseAsyncReferenceField, mongoengine.fields.ReferenceField

A reference to a document that will be automatically dereferenced on access (lazily).

Use the reverse_delete_rule to handle what should happen if the document the field is referencing is deleted. EmbeddedDocuments, DictFields and MapFields do not support reverse_delete_rule and an InvalidDocumentError will be raised if trying to set it on one of these Document / Field types.

The options are:

  • DO_NOTHING (0) - don’t do anything (default).
  • NULLIFY (1) - Updates the reference to null.
  • CASCADE (2) - Deletes the documents associated with the reference.
  • DENY (3) - Prevent the deletion of the reference object.
  • PULL (4) - Pull the reference from a ListField of references
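
For instance, a field declaration using one of these rules might look like the sketch below (class names are illustrative; the rule constants are MongoEngine's):

from mongomotor import Document
from mongomotor.fields import ReferenceField
from mongoengine import CASCADE, StringField

class Employee(Document):
    name = StringField()

class Task(Document):
    title = StringField()
    owner = ReferenceField(Employee, reverse_delete_rule=CASCADE)  # deleting an Employee deletes their Tasks
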

Alternative syntax for registering delete rules (useful when implementing bi-directional delete rules)

class Bar(Document):
    content = StringField()
    foo = ReferenceField('Foo')

Foo.register_delete_rule(Bar, 'foo', NULLIFY)

mongomotor.queryset module

class mongomotor.queryset.QuerySet(document, collection)[source]

Bases: mongoengine.queryset.queryset.QuerySet

aggregate(*pipeline, **kwargs)

Perform an aggregation based on your queryset params.

Parameters:pipeline – list of aggregation commands, see: http://docs.mongodb.org/manual/core/aggregation-pipeline/

coroutine aggregate_average(field)[source]

Average over the values of the specified field.

Parameters:field – the field to average over; use dot-notation to refer to embedded document fields

This method is more performant than the regular average, because it uses the aggregation framework instead of map-reduce.

aggregate_sum(field)[source]

Sum over the values of the specified field.

Parameters:field – the field to sum over; use dot-notation to refer to embedded document fields

This method is more performant than the regular sum, because it uses the aggregation framework instead of map-reduce.

aiter_compat(func)
all()

Returns all documents.

all_fields()

Include all fields. Resets all previous calls to .only() or .exclude().

post = BlogPost.objects.exclude('comments').all_fields()
as_pymongo()

Instead of returning Document instances, return raw values from pymongo.

This method is particularly useful if you don’t need dereferencing and care primarily about the speed of data retrieval.

coroutine average(field)[source]

Average over the values of the specified field.

Parameters:field – the field to average over; use dot-notation to refer to embedded document fields
batch_size(size)

Limit the number of documents returned in a single batch (each batch requires a round trip to the server).

See http://api.mongodb.com/python/current/api/pymongo/cursor.html#pymongo.cursor.Cursor.batch_size for details.

Parameters:size – desired size of each batch.
clone()

Create a copy of the current queryset.

comment(text)

Add a comment to the query.

See https://docs.mongodb.com/manual/reference/method/cursor.comment/#cursor.comment for details.

coroutine count(with_limit_and_skip=True)[source]

Counts the documents in the queryset.

Parameters:with_limit_and_skip – Indicates if limit and skip applied to the queryset should be taken into account.
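
Since count() is a coroutine it must be awaited (run inside a coroutine, with BlogPost as the illustrative document used elsewhere in these docs):

total = await BlogPost.objects.count()
unlimited = await BlogPost.objects.limit(10).count(with_limit_and_skip=False)  # ignores the limit
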
create(**kwargs)

Create new object. Returns the saved object instance.

delete(write_concern=None, _from_doc_delete=False, cascade_refs=None)[source]

Deletes the documents matched by the query.

Parameters:
  • write_concern – Extra keyword arguments are passed down which will be used as options for the resultant getLastError command. For example, save(..., write_concern={w: 2, fsync: True}, ...) will wait until at least two servers have recorded the write and will force an fsync on the primary server.
  • _from_doc_delete – True when called from a document’s delete method; signals will already have been triggered, so don’t loop.

Returns the number of deleted documents.

coroutine distinct(field)

Return a list of distinct values for a given field.

Parameters:field – the field to select distinct values from

Note

This is a command and won’t take ordering or limit into account.
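
A usage sketch, assuming a 'tags' field on the illustrative BlogPost document and execution inside a coroutine:

tags = await BlogPost.objects.distinct('tags')
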

ensure_index(**kwargs)

Deprecated. Use Document.ensure_index() instead.

exclude(*fields)

Opposite of .only(); excludes some of the document’s fields.

post = BlogPost.objects(...).exclude('comments')

Note

exclude() is chainable and will perform a union, so the following will exclude both title and author.name:

post = BlogPost.objects.exclude('title').exclude('author.name')

all_fields() will reset any field filters.

Parameters:fields – fields to exclude
exec_js(code, *fields, **options)

Execute a Javascript function on the server. A list of fields may be provided, which will be translated to their correct names and supplied as the arguments to the function. A few extra variables are added to the function’s scope: collection, which is the name of the collection in use; query, which is an object representing the current query; and options, which is an object containing any options specified as keyword arguments.

As fields in MongoEngine may use different names in the database (set using the db_field keyword argument to a Field constructor), a mechanism exists for replacing MongoEngine field names with the database field names in Javascript code. When accessing a field, use square-bracket notation, and prefix the MongoEngine field name with a tilde (~).

Parameters:
  • code – a string of Javascript code to execute
  • fields – fields that you will be using in your function, which will be passed in to your function as arguments
  • options – options that you want available to the function (accessed in Javascript through the options object)
coroutine explain(format=False)

Return an explain plan record for the QuerySet’s cursor.

Parameters:format – format the plan before returning it
fetch_next[source]
fields(_only_called=False, **kwargs)

Manipulate how you load this document’s fields. Used by .only() and .exclude() to manipulate which fields to retrieve. If called directly, use a set of kwargs similar to the MongoDB projection document. For example:

Include only a subset of fields:

posts = BlogPost.objects(...).fields(author=1, title=1)

Exclude a specific field:

posts = BlogPost.objects(...).fields(comments=0)

To retrieve a subrange of array elements:

posts = BlogPost.objects(...).fields(slice__comments=5)
Parameters:kwargs – A set of keyword arguments identifying what to include, exclude, or slice.
filter(*q_objs, **query)

An alias of __call__()

coroutine first()[source]

Retrieve the first object matching the query.

from_json(json_data)

Converts json data to unsaved objects

coroutine get(*q_objs, **query)[source]

Retrieve the matching object, raising MultipleObjectsReturned or DocumentName.MultipleObjectsReturned if multiple results are found, and DoesNotExist or DocumentName.DoesNotExist if no results are found.
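
Both get() and first() are coroutines; a usage sketch (field names are illustrative, run inside a coroutine):

try:
    post = await BlogPost.objects.get(slug='hello-world')
except BlogPost.DoesNotExist:
    post = None

latest = await BlogPost.objects.order_by('-id').first()
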

hint(index=None)

Added ‘hint’ support, telling Mongo the proper index to use for the query.

Judicious use of hints can greatly improve query performance. When doing a query on multiple fields (at least one of which is indexed) pass the indexed field as a hint to the query.

Hinting will not do anything if the corresponding index does not exist. The last hint applied to this cursor takes precedence over all others.

coroutine in_bulk(object_ids)

Retrieve a set of documents by their ids.

Parameters:object_ids – a list or tuple of ObjectIds
Return type:dict of ObjectIds as keys and collection-specific Document subclasses as values.
coroutine inline_map_reduce(map_f, reduce_f, **mr_kwargs)[source]

Perform a map/reduce query using the current query spec and ordering. While map_reduce respects QuerySet chaining, it must be the last call made, as it does not return a malleable QuerySet.

Returns a generator of MapReduceDocument with the map/reduce results.

Note

This method only works with inline map/reduce. If you want to send the output to a collection use map_reduce().

coroutine insert(doc_or_docs, load_bulk=True, write_concern=None)[source]

Bulk insert documents.

Parameters:
  • doc_or_docs – a document or list of documents to be inserted
  • load_bulk – (optional) If True, returns the list of document instances
  • write_concern – Extra keyword arguments are passed down to insert() which will be used as options for the resultant getLastError command. For example, insert(..., {w: 2, fsync: True}) will wait until at least two servers have recorded the write and will force an fsync on each server being written to.

By default returns document instances, set load_bulk to False to return just ObjectIds
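
A bulk-insert sketch (the documents are illustrative, run inside a coroutine):

posts = [BlogPost(title='one'), BlogPost(title='two')]
saved = await BlogPost.objects.insert(posts)   # returns the saved BlogPost instances
# with load_bulk=False the same call would return only the ObjectIds
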

coroutine item_frequencies(field, normalize=False)[source]

Returns a dictionary of all items present in a field across the whole queried set of documents, and their corresponding frequency. This is useful for generating tag clouds, or searching documents.

Note

Can only do direct simple mappings and cannot map across ReferenceField or GenericReferenceField; for more complex counting a manual map/reduce call is required.

If the field is a ListField, the items within each list will be counted individually.

Parameters:
  • field – the field to use
  • normalize – normalize the results so they add to 1.0
limit(n)

Limit the number of returned documents to n. This may also be achieved using array-slicing syntax (e.g. User.objects[:5]).

Parameters:n – the maximum number of objects to return
coroutine map_reduce(map_f, reduce_f, output, **mr_kwargs)[source]

Perform a map/reduce query using the current query spec and ordering. While map_reduce respects QuerySet chaining, it must be the last call made, as it does not return a malleable QuerySet.

Parameters:
  • map_f – map function, as Code or string
  • reduce_f – reduce function, as Code or string
  • output – output collection name; if set to ‘inline’ it will try to use inline_map_reduce. This can also be a dictionary containing output options.
  • mr_kwargs – Arguments for mongodb map_reduce see: https://docs.mongodb.com/manual/reference/command/mapReduce/ for more information

Returns a dict with the full response of the server

Note

This is different from MongoEngine’s map_reduce. It does not support inline map/reduce (use inline_map_reduce() for that), and it does not return a generator of MapReduceDocument, but returns the server response instead.

max_time_ms(ms)

Wait ms milliseconds before killing the query on the server

Parameters:ms – the number of milliseconds before killing the query on the server
coroutine modify(upsert=False, full_response=False, remove=False, new=False, **update)

Update and return the updated document.

Returns either the document before or after modification based on new parameter. If no documents match the query and upsert is false, returns None. If upserting and new is false, returns None.

If the full_response parameter is True, the return value will be the entire response object from the server, including the ‘ok’ and ‘lastErrorObject’ fields, rather than just the modified document. This is useful mainly because the ‘lastErrorObject’ document holds information about the command’s execution.

Parameters:
  • upsert – insert if document doesn’t exist (default False)
  • full_response – return the entire response object from the server (default False, not available for PyMongo 3+)
  • remove – remove rather than updating (default False)
  • new – return updated rather than original document (default False)
  • update – Django-style update keyword arguments
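
A sketch of an atomic modify-and-return (field names are illustrative, run inside a coroutine):

post = await BlogPost.objects(slug='hello-world').modify(
    new=True,                 # return the document after the update
    set__title='Hello again')
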
next_object()[source]
no_cache()[source]

Convert to a non-caching queryset

no_dereference()

Turn off any dereferencing for the results of this queryset.

no_sub_classes()

Only return instances of this document and not any inherited documents

none()

Helper that just returns a list

only(*fields)

Load only a subset of this document’s fields.

post = BlogPost.objects(...).only('title', 'author.name')

Note

only() is chainable and will perform a union, so the following will fetch both title and author.name:

post = BlogPost.objects.only('title').only('author.name')

all_fields() will reset any field filters.

Parameters:fields – fields to include
order_by(*keys)

Order the QuerySet by the keys. The order may be specified by prepending each of the keys by a + or a -. Ascending order is assumed. If no keys are passed, existing ordering is cleared instead.

Parameters:keys – fields to order the query results by; keys may be prefixed with + or - to determine the ordering direction
read_preference(read_preference)

Change the read_preference when querying.

Parameters:read_preference – override ReplicaSetConnection-level preference.
rewind()

Rewind the cursor to its unevaluated state.

scalar(*fields)

Instead of returning Document instances, return either a specific value or a tuple of values in order.

Can be used along with no_dereference() to turn off dereferencing.

Note

This affects all results and can be unset by calling scalar without arguments. Calls only() automatically.

Parameters:fields – One or more fields to return instead of a Document.
search_text(text, language=None)

Start a text search using text indexes. Requires MongoDB server version 2.6+.

Parameters:language – The language that determines the list of stop words for the search and the rules for the stemmer and tokenizer. If not specified, the search uses the default language of the index. For supported languages, see Text Search Languages (http://docs.mongodb.org/manual/reference/text-search-languages/#text-search-languages).

Handles dereferencing of DBRef objects or ObjectIds to a maximum depth in order to cut down the number of queries to MongoDB.

skip(n)

Skip n documents before returning the results. This may also be achieved using array-slicing syntax (e.g. User.objects[5:]).

Parameters:n – the number of objects to skip before returning results
slave_okay(enabled)

Enable or disable the slave_okay when querying.

Parameters:enabled – whether or not the slave_okay is enabled
snapshot(enabled)

Enable or disable snapshot mode when querying.

Parameters:enabled – whether or not snapshot mode is enabled

Changed in version 0.5: made chainable. Deprecated: ignored with PyMongo 3+.

sum(field)[source]

Sum over the values of the specified field.

Parameters:field – the field to sum over; use dot-notation to refer to embedded document fields
timeout(enabled)

Enable or disable the default mongod timeout when querying.

Parameters:enabled – whether or not the timeout is used

Changed in version 0.5: made chainable.

to_json(*args, **kwargs)

Converts a queryset to JSON

coroutine to_list(length=100)[source]

Returns a list of the current documents in the queryset.

Parameters:length – maximum number of documents to return for this call.
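
A sketch of materializing part of a queryset into a list (run inside a coroutine):

recent = await BlogPost.objects.order_by('-id').to_list(length=10)
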
coroutine update(upsert=False, multi=True, write_concern=None, full_result=False, **update)

Perform an atomic update on the fields matched by the query.

Parameters:
  • upsert – insert if document doesn’t exist (default False)
  • multi – Update multiple documents.
  • write_concern – Extra keyword arguments are passed down which will be used as options for the resultant getLastError command. For example, save(..., write_concern={w: 2, fsync: True}, ...) will wait until at least two servers have recorded the write and will force an fsync on the primary server.
  • full_result – Return the full result dictionary rather than just the number updated, e.g. return {'n': 2, 'nModified': 2, 'ok': 1.0, 'updatedExisting': True}.
  • update – Django-style update keyword arguments
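
A sketch of an atomic multi-document update using Django-style keyword arguments (field names are illustrative, run inside a coroutine):

n = await BlogPost.objects(published=False).update(set__published=True)
# n is the number of updated documents (or a full result dict if full_result=True)
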
update_one(upsert=False, write_concern=None, **update)

Perform an atomic update on the fields of the first document matched by the query.

Parameters:
  • upsert – insert if document doesn’t exist (default False)
  • write_concern – Extra keyword arguments are passed down which will be used as options for the resultant getLastError command. For example, save(..., write_concern={w: 2, fsync: True}, ...) will wait until at least two servers have recorded the write and will force an fsync on the primary server.
  • update – Django-style update keyword arguments
coroutine upsert_one(write_concern=None, **update)[source]

Overwrite or add the first document matched by the query.

Parameters:
  • write_concern – Extra keyword arguments are passed down which will be used as options for the resultant getLastError command. For example, save(..., write_concern={w: 2, fsync: True}, ...) will wait until at least two servers have recorded the write and will force an fsync on the primary server.
  • update – Django-style update keyword arguments

Returns the new or overwritten document.

using(alias)

This method is for controlling which database the QuerySet will be evaluated against if you are using more than one database.

Parameters:alias – The database alias
values_list(*fields)

An alias for scalar

where(where_clause)

Filter QuerySet results with a $where clause (a Javascript expression). Performs automatic field name substitution like mongoengine.queryset.Queryset.exec_js().

Note

When using this mode of query, the database will call your function, or evaluate your predicate clause, for each object in the collection.

with_id(object_id)

Retrieve the object matching the id provided. Uses object_id only and raises InvalidQueryError if a filter has been applied. Returns None if no document exists with that id.

Parameters:object_id – the value for the id of the document to look up
class mongomotor.queryset.QuerySetNoCache(document, collection)[source]

Bases: mongomotor.queryset.QuerySet

A non-caching QuerySet.

aggregate(*pipeline, **kwargs)

Perform an aggregation based on your queryset params.

Parameters:pipeline – list of aggregation commands, see: http://docs.mongodb.org/manual/core/aggregation-pipeline/

coroutine aggregate_average(field)

Average over the values of the specified field.

Parameters:field – the field to average over; use dot-notation to refer to embedded document fields

This method is more performant than the regular average, because it uses the aggregation framework instead of map-reduce.

aggregate_sum(field)

Sum over the values of the specified field.

Parameters:field – the field to sum over; use dot-notation to refer to embedded document fields

This method is more performant than the regular sum, because it uses the aggregation framework instead of map-reduce.

aiter_compat(func)
all()

Returns all documents.

all_fields()

Include all fields. Resets all previous calls to .only() or .exclude().

post = BlogPost.objects.exclude('comments').all_fields()
as_pymongo()

Instead of returning Document instances, return raw values from pymongo.

This method is particularly useful if you don’t need dereferencing and care primarily about the speed of data retrieval.

coroutine average(field)

Average over the values of the specified field.

Parameters:field – the field to average over; use dot-notation to refer to embedded document fields
batch_size(size)

Limit the number of documents returned in a single batch (each batch requires a round trip to the server).

See http://api.mongodb.com/python/current/api/pymongo/cursor.html#pymongo.cursor.Cursor.batch_size for details.

Parameters:size – desired size of each batch.
cache()[source]

Convert to a caching queryset

clone()

Create a copy of the current queryset.

comment(text)

Add a comment to the query.

See https://docs.mongodb.com/manual/reference/method/cursor.comment/#cursor.comment for details.

coroutine count(with_limit_and_skip=True)

Counts the documents in the queryset.

Parameters:with_limit_and_skip – Indicates if limit and skip applied to the queryset should be taken into account.
create(**kwargs)

Create new object. Returns the saved object instance.

delete(write_concern=None, _from_doc_delete=False, cascade_refs=None)

Deletes the documents matched by the query.

Parameters:
  • write_concern – Extra keyword arguments are passed down which will be used as options for the resultant getLastError command. For example, save(..., write_concern={w: 2, fsync: True}, ...) will wait until at least two servers have recorded the write and will force an fsync on the primary server.
  • _from_doc_delete – True when called from a document’s delete method; signals will already have been triggered, so don’t loop.

Returns the number of deleted documents.

coroutine distinct(field)

Return a list of distinct values for a given field.

Parameters:field – the field to select distinct values from

Note

This is a command and won’t take ordering or limit into account.

ensure_index(**kwargs)

Deprecated. Use Document.ensure_index() instead.

exclude(*fields)

Opposite of .only(); excludes some of the document’s fields.

post = BlogPost.objects(...).exclude('comments')

Note

exclude() is chainable and will perform a union, so the following will exclude both title and author.name:

post = BlogPost.objects.exclude('title').exclude('author.name')

all_fields() will reset any field filters.

Parameters:fields – fields to exclude
exec_js(code, *fields, **options)

Execute a Javascript function on the server. A list of fields may be provided, which will be translated to their correct names and supplied as the arguments to the function. A few extra variables are added to the function’s scope: collection, which is the name of the collection in use; query, which is an object representing the current query; and options, which is an object containing any options specified as keyword arguments.

As fields in MongoEngine may use different names in the database (set using the db_field keyword argument to a Field constructor), a mechanism exists for replacing MongoEngine field names with the database field names in Javascript code. When accessing a field, use square-bracket notation, and prefix the MongoEngine field name with a tilde (~).

Parameters:
  • code – a string of Javascript code to execute
  • fields – fields that you will be using in your function, which will be passed in to your function as arguments
  • options – options that you want available to the function (accessed in Javascript through the options object)
coroutine explain(format=False)

Return an explain plan record for the QuerySet’s cursor.

Parameters:format – format the plan before returning it
fetch_next
fields(_only_called=False, **kwargs)

Manipulate how you load this document’s fields. Used by .only() and .exclude() to manipulate which fields to retrieve. If called directly, use a set of kwargs similar to the MongoDB projection document. For example:

Include only a subset of fields:

posts = BlogPost.objects(...).fields(author=1, title=1)

Exclude a specific field:

posts = BlogPost.objects(...).fields(comments=0)

To retrieve a subrange of array elements:

posts = BlogPost.objects(...).fields(slice__comments=5)
Parameters:kwargs – A set of keyword arguments identifying what to include, exclude, or slice.
filter(*q_objs, **query)

An alias of __call__()

coroutine first()

Retrieve the first object matching the query.

from_json(json_data)

Converts json data to unsaved objects

coroutine get(*q_objs, **query)

Retrieve the matching object, raising MultipleObjectsReturned or DocumentName.MultipleObjectsReturned if multiple results are found, and DoesNotExist or DocumentName.DoesNotExist if no results are found.

hint(index=None)

Added ‘hint’ support, telling Mongo the proper index to use for the query.

Judicious use of hints can greatly improve query performance. When doing a query on multiple fields (at least one of which is indexed) pass the indexed field as a hint to the query.

Hinting will not do anything if the corresponding index does not exist. The last hint applied to this cursor takes precedence over all others.

coroutine in_bulk(object_ids)

Retrieve a set of documents by their ids.

Parameters:object_ids – a list or tuple of ObjectIds
Return type:dict of ObjectIds as keys and collection-specific Document subclasses as values.
coroutine inline_map_reduce(map_f, reduce_f, **mr_kwargs)

Perform a map/reduce query using the current query spec and ordering. While map_reduce respects QuerySet chaining, it must be the last call made, as it does not return a malleable QuerySet.

Returns a generator of MapReduceDocument with the map/reduce results.

Note

This method only works with inline map/reduce. If you want to send the output to a collection use map_reduce().

coroutine insert(doc_or_docs, load_bulk=True, write_concern=None)

Bulk insert documents.

Parameters:
  • doc_or_docs – a document or list of documents to be inserted
  • load_bulk – (optional) If True, returns the list of document instances
  • write_concern – Extra keyword arguments are passed down to insert() which will be used as options for the resultant getLastError command. For example, insert(..., {w: 2, fsync: True}) will wait until at least two servers have recorded the write and will force an fsync on each server being written to.

By default returns document instances, set load_bulk to False to return just ObjectIds

coroutine item_frequencies(field, normalize=False)

Returns a dictionary of all items present in a field across the whole queried set of documents, and their corresponding frequency. This is useful for generating tag clouds, or searching documents.

Note

Can only do direct simple mappings and cannot map across ReferenceField or GenericReferenceField; for more complex counting a manual map/reduce call is required.

If the field is a ListField, the items within each list will be counted individually.

Parameters:
  • field – the field to use
  • normalize – normalize the results so they add to 1.0
limit(n)

Limit the number of returned documents to n. This may also be achieved using array-slicing syntax (e.g. User.objects[:5]).

Parameters:n – the maximum number of objects to return
coroutine map_reduce(map_f, reduce_f, output, **mr_kwargs)

Perform a map/reduce query using the current query spec and ordering. While map_reduce respects QuerySet chaining, it must be the last call made, as it does not return a malleable QuerySet.

Parameters:
  • map_f – map function, as Code or string
  • reduce_f – reduce function, as Code or string
  • output – output collection name; if set to ‘inline’ it will try to use inline_map_reduce. This can also be a dictionary containing output options.
  • mr_kwargs – Arguments for mongodb map_reduce see: https://docs.mongodb.com/manual/reference/command/mapReduce/ for more information

Returns a dict with the full response of the server

Note

This is different from MongoEngine’s map_reduce. It does not support inline map/reduce (use inline_map_reduce() for that), and it does not return a generator of MapReduceDocument, but returns the server response instead.

max_time_ms(ms)

Wait ms milliseconds before killing the query on the server

Parameters:ms – the number of milliseconds before killing the query on the server
coroutine modify(upsert=False, full_response=False, remove=False, new=False, **update)

Update and return the updated document.

Returns either the document before or after modification based on new parameter. If no documents match the query and upsert is false, returns None. If upserting and new is false, returns None.

If the full_response parameter is True, the return value will be the entire response object from the server, including the ‘ok’ and ‘lastErrorObject’ fields, rather than just the modified document. This is useful mainly because the ‘lastErrorObject’ document holds information about the command’s execution.

Parameters:
  • upsert – insert if document doesn’t exist (default False)
  • full_response – return the entire response object from the server (default False, not available for PyMongo 3+)
  • remove – remove rather than updating (default False)
  • new – return updated rather than original document (default False)
  • update – Django-style update keyword arguments
next_object()
no_cache()

Convert to a non-caching queryset

no_dereference()

Turn off any dereferencing for the results of this queryset.

no_sub_classes()

Only return instances of this document and not any inherited documents

none()

Helper that just returns a list

only(*fields)

Load only a subset of this document’s fields.

post = BlogPost.objects(...).only('title', 'author.name')

Note

only() is chainable and will perform a union, so the following will fetch both title and author.name:

post = BlogPost.objects.only('title').only('author.name')

all_fields() will reset any field filters.

Parameters:fields – fields to include
order_by(*keys)

Order the QuerySet by the keys. The order may be specified by prepending each of the keys by a + or a -. Ascending order is assumed. If no keys are passed, existing ordering is cleared instead.

Parameters:keys – fields to order the query results by; keys may be prefixed with + or - to determine the ordering direction
read_preference(read_preference)

Change the read_preference when querying.

Parameters:read_preference – override ReplicaSetConnection-level preference.
rewind()

Rewind the cursor to its unevaluated state.

scalar(*fields)

Instead of returning Document instances, return either a specific value or a tuple of values in order.

Can be used along with no_dereference() to turn off dereferencing.

Note

This affects all results and can be unset by calling scalar without arguments. Calls only() automatically.

Parameters:fields – One or more fields to return instead of a Document.
search_text(text, language=None)

Start a text search using text indexes. Requires MongoDB server version 2.6+.

Parameters:language – The language that determines the list of stop words for the search and the rules for the stemmer and tokenizer. If not specified, the search uses the default language of the index. For supported languages, see Text Search Languages (http://docs.mongodb.org/manual/reference/text-search-languages/#text-search-languages).

Handles dereferencing of DBRef objects or ObjectIds to a maximum depth in order to cut down the number of queries to MongoDB.

skip(n)

Skip n documents before returning the results. This may also be achieved using array-slicing syntax (e.g. User.objects[5:]).

Parameters:n – the number of objects to skip before returning results
slave_okay(enabled)

Enable or disable the slave_okay when querying.

Parameters:enabled – whether or not the slave_okay is enabled
snapshot(enabled)

Enable or disable snapshot mode when querying.

Parameters:enabled – whether or not snapshot mode is enabled

Changed in version 0.5: made chainable. Deprecated: ignored with PyMongo 3+.

sum(field)

Sum over the values of the specified field.

Parameters:field – the field to sum over; use dot-notation to refer to embedded document fields
timeout(enabled)

Enable or disable the default mongod timeout when querying.

Parameters:enabled – whether or not the timeout is used

Changed in version 0.5: made chainable.

to_json(*args, **kwargs)

Converts a queryset to JSON

coroutine to_list(length=100)

Returns a list of the current documents in the queryset.

Parameters:length – maximum number of documents to return for this call.
coroutine update(upsert=False, multi=True, write_concern=None, full_result=False, **update)

Perform an atomic update on the fields matched by the query.

Parameters:
  • upsert – insert if document doesn’t exist (default False)
  • multi – Update multiple documents.
  • write_concern – Extra keyword arguments are passed down which will be used as options for the resultant getLastError command. For example, save(..., write_concern={w: 2, fsync: True}, ...) will wait until at least two servers have recorded the write and will force an fsync on the primary server.
  • full_result – Return the full result dictionary rather than just the number updated, e.g. return {'n': 2, 'nModified': 2, 'ok': 1.0, 'updatedExisting': True}.
  • update – Django-style update keyword arguments
update_one(upsert=False, write_concern=None, **update)

Perform an atomic update on the fields of the first document matched by the query.

Parameters:
  • upsert – insert if document doesn’t exist (default False)
  • write_concern – Extra keyword arguments are passed down which will be used as options for the resultant getLastError command. For example, save(..., write_concern={w: 2, fsync: True}, ...) will wait until at least two servers have recorded the write and will force an fsync on the primary server.
  • update – Django-style update keyword arguments
coroutine upsert_one(write_concern=None, **update)

Overwrite or add the first document matched by the query.

Parameters:
  • write_concern – Extra keyword arguments are passed down which will be used as options for the resultant getLastError command. For example, save(..., write_concern={w: 2, fsync: True}, ...) will wait until at least two servers have recorded the write and will force an fsync on the primary server.
  • update – Django-style update keyword arguments

Returns the new or overwritten document.

using(alias)

This method is for controlling which database the QuerySet will be evaluated against if you are using more than one database.

Parameters:alias – The database alias
values_list(*fields)

An alias for scalar

where(where_clause)

Filter QuerySet results with a $where clause (a Javascript expression). Performs automatic field name substitution like mongoengine.queryset.Queryset.exec_js().

Note

When using this mode of query, the database will call your function, or evaluate your predicate clause, for each object in the collection.

with_id(object_id)

Retrieve the object matching the id provided. Uses object_id only and raises InvalidQueryError if a filter has been applied. Returns None if no document exists with that id.

Parameters:object_id – the value for the id of the document to look up