cassandra.query - Prepared Statements, Batch Statements, Tracing, and Row Factories

Returns each row as a tuple.
Example:
>>> from cassandra.query import tuple_factory
>>> session = cluster.connect('mykeyspace')
>>> session.row_factory = tuple_factory
>>> rows = session.execute("SELECT name, age FROM users LIMIT 1")
>>> print(rows[0])
('Bob', 42)
Changed in version 2.0.0: moved from cassandra.decoder to cassandra.query
Returns each row as a namedtuple. This is the default row factory.
Example:
>>> from cassandra.query import named_tuple_factory
>>> session = cluster.connect('mykeyspace')
>>> session.row_factory = named_tuple_factory
>>> rows = session.execute("SELECT name, age FROM users LIMIT 1")
>>> user = rows[0]
>>> # you can access fields by their name:
>>> print("name: %s, age: %d" % (user.name, user.age))
name: Bob, age: 42
>>> # or you can access fields by their position (like a tuple)
>>> name, age = user
>>> print("name: %s, age: %d" % (name, age))
name: Bob, age: 42
>>> name = user[0]
>>> age = user[1]
>>> print("name: %s, age: %d" % (name, age))
name: Bob, age: 42
Changed in version 2.0.0: moved from cassandra.decoder to cassandra.query
Returns each row as a dict.
Example:
>>> from cassandra.query import dict_factory
>>> session = cluster.connect('mykeyspace')
>>> session.row_factory = dict_factory
>>> rows = session.execute("SELECT name, age FROM users LIMIT 1")
>>> print(rows[0])
{u'age': 42, u'name': u'Bob'}
Changed in version 2.0.0: moved from cassandra.decoder to cassandra.query

Like dict_factory(), but returns each row as an OrderedDict, so the order of the columns is preserved.

Changed in version 2.0.0: moved from cassandra.decoder to cassandra.query
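Example (a sketch mirroring the dict_factory example above; the output shown is illustrative):

>>> from cassandra.query import ordered_dict_factory
>>> session.row_factory = ordered_dict_factory
>>> rows = session.execute("SELECT name, age FROM users LIMIT 1")
>>> print(rows[0])
OrderedDict([(u'name', u'Bob'), (u'age', 42)])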
A simple, un-prepared query.

query_string should be a literal CQL statement with the exception of parameter placeholders that will be filled through the parameters argument of Session.execute().

See Statement attributes for a description of the other parameters.
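Example (a sketch; the keyspace, table, and values are assumptions):

>>> from cassandra import ConsistencyLevel
>>> from cassandra.query import SimpleStatement
>>> query = SimpleStatement(
...     "INSERT INTO users (name, age) VALUES (%s, %s)",
...     consistency_level=ConsistencyLevel.QUORUM)
>>> session.execute(query, ('Bob', 42))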
A statement that has been prepared against at least one Cassandra node. Instances of this class should not be created directly, but through Session.prepare().

A PreparedStatement should be prepared only once. Re-preparing a statement may affect performance (as the operation requires a network roundtrip).

A note about * in prepared statements: Do not use * in prepared statements if you might change the schema of the table being queried. The driver and server each maintain a map between metadata for a schema and statements that were prepared against that schema. When a user changes a schema, e.g. by adding or removing a column, the server invalidates its mappings involving that schema. However, there is currently no way to propagate that invalidation to drivers. Thus, after a schema change, the driver will incorrectly interpret the results of SELECT * queries prepared before the schema change. This is currently being addressed in CASSANDRA-10786.
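A minimal sketch of preparing a statement once and reusing it, selecting explicit columns rather than * (table and values are assumptions):

>>> prepared = session.prepare("SELECT name, age FROM users WHERE name = ?")
>>> for name in ('Alice', 'Bob'):
...     rows = session.execute(prepared, (name,))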
Creates and returns a BoundStatement instance using values.

See BoundStatement.bind() for rules on input values.

A prepared statement that has been bound to a particular set of values. These may be created directly or through PreparedStatement.bind().

prepared_statement should be an instance of PreparedStatement.

See Statement attributes for a description of the other parameters.
Binds a sequence of values for the prepared statement parameters and returns this instance. Note that values must be:
a sequence, even if you are only binding one value, or
a dict that relates 1-to-1 between dict keys and columns

Changed in version 2.6.0: UNSET_VALUE was introduced. These can be bound as positional parameters in a sequence, or by name in a dict. Additionally, when using protocol v4+:
short sequences will be extended to match bind parameters with UNSET_VALUE
names may be omitted from a dict with UNSET_VALUE implied.

Changed in version 3.0.0: method will not throw if extra keys are present in bound dict (PYTHON-178)
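A sketch of the binding rules above, assuming a statement prepared against a hypothetical users(name, age) table:

from cassandra.query import UNSET_VALUE

insert_user = session.prepare("INSERT INTO users (name, age) VALUES (?, ?)")
bound = insert_user.bind(('Bob', 42))                  # a sequence, even for a single value
bound = insert_user.bind({'name': 'Bob', 'age': 42})   # a dict keyed 1-to-1 by column name
bound = insert_user.bind(('Bob', UNSET_VALUE))         # protocol v4+: leave 'age' unset
session.execute(bound)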
The partition_key portion of the primary key, which can be used to determine which nodes are replicas for the query.

If the partition key is a composite, a list or tuple must be passed in. Each key component should be in its packed (binary) format, so all components should be strings.
An abstract class representing a single query. There are three subclasses: SimpleStatement, BoundStatement, and BatchStatement. These can be passed to Session.execute().
The partition_key portion of the primary key, which can be used to determine which nodes are replicas for the query.

If the partition key is a composite, a list or tuple must be passed in. Each key component should be in its packed (binary) format, so all components should be strings.
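For illustration only, a hedged sketch of supplying a routing key manually on a simple statement whose partition key is assumed to be a single text column:

from cassandra.query import SimpleStatement

query = SimpleStatement("SELECT name, age FROM users WHERE name = %s")
query.routing_key = 'Bob'.encode('utf-8')   # packed (binary) form of the partition key value
session.execute(query, ('Bob',))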
The serial consistency level is only used by conditional updates (INSERT, UPDATE and DELETE with an IF condition). For those, the serial_consistency_level defines the consistency level of the serial phase (or “paxos” phase) while the normal consistency_level defines the consistency for the “learn” phase, i.e. what type of reads will be guaranteed to see the update right away. For example, if a conditional write has a consistency_level of QUORUM (and is successful), then a QUORUM read is guaranteed to see that write. But if the regular consistency_level of that write is ANY, then only a read with a consistency_level of SERIAL is guaranteed to see it (even a read with consistency ALL is not guaranteed to be enough).

The serial consistency can only be one of SERIAL or LOCAL_SERIAL. While SERIAL guarantees full linearizability (with other SERIAL updates), LOCAL_SERIAL only guarantees it in the local data center.

The serial consistency level is ignored for any query that is not a conditional update. Serial reads should use the regular consistency_level.

Serial consistency levels may only be used against Cassandra 2.0+ and the protocol_version must be set to 2 or higher.

See Lightweight Transactions (Compare-and-set) for a discussion on how to work with results returned from conditional statements.

Added in version 2.0.0.
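To make the two levels concrete, a hedged sketch of a conditional (IF) update that sets both explicitly; the table, columns, and use of ResultSet.was_applied are assumptions here:

from cassandra import ConsistencyLevel
from cassandra.query import SimpleStatement

update = SimpleStatement(
    "UPDATE users SET age = 43 WHERE name = 'Bob' IF age = 42",
    consistency_level=ConsistencyLevel.QUORUM,               # "learn" phase
    serial_consistency_level=ConsistencyLevel.LOCAL_SERIAL,  # "paxos" phase
)
result = session.execute(update)
print(result.was_applied)   # True only if the IF condition matched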
A protocol-level batch of operations which are applied atomically by default.

Added in version 2.0.0.

batch_type specifies the BatchType for the batch operation. Defaults to BatchType.LOGGED.

retry_policy should be a RetryPolicy instance for controlling retries on the operation.

consistency_level should be a ConsistencyLevel value to be used for all operations in the batch.

custom_payload is a custom payload (see Custom Payloads) passed to the server. Note: as Statement objects are added to the batch, this map is updated with any values found in their custom payloads. These are only allowed when using protocol version 4 or higher.
Example usage:
insert_user = session.prepare("INSERT INTO users (name, age) VALUES (?, ?)")
batch = BatchStatement(consistency_level=ConsistencyLevel.QUORUM)

for (name, age) in users_to_insert:
    batch.add(insert_user, (name, age))

session.execute(batch)
You can also mix different types of operations within a batch:
batch = BatchStatement()
batch.add(SimpleStatement("INSERT INTO users (name, age) VALUES (%s, %s)"), (name, age))
batch.add(SimpleStatement("DELETE FROM pending_users WHERE name=%s"), (name,))
session.execute(batch)
Added in version 2.0.0.
Changed in version 2.1.0: Added serial_consistency_level as a parameter
Changed in version 2.6.0: Added custom_payload as a parameter
Adds a Statement and optional sequence of parameters to be used with the statement to the batch.

Like with other statements, parameters must be a sequence, even if there is only one item.
Adds a sequence of Statement objects and a matching sequence of parameters to the batch. Statement and parameter sequences must be of equal length or one will be truncated. None can be used in the parameters position where none are needed.
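A minimal sketch of add_all(), reusing insert_user and session from the examples above; the statement and parameter sequences line up one-to-one:

from cassandra.query import BatchStatement, SimpleStatement

batch = BatchStatement()
statements = [insert_user,
              SimpleStatement("DELETE FROM pending_users WHERE name='Alice'")]
parameters = [('Alice', 30), None]   # None where a statement needs no parameters
batch.add_all(statements, parameters)
session.execute(batch)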
This is a convenience method to clear a batch statement for reuse.

Note: it should not be used concurrently with uncompleted execution futures executing the same BatchStatement.
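For example, a sketch of reusing one batch across synchronous executions by clearing it between them (insert_user and session as in the examples above):

batch = BatchStatement()
batch.add(insert_user, ('Alice', 30))
session.execute(batch)

batch.clear()                        # safe here: the previous execution has completed
batch.add(insert_user, ('Bob', 42))
session.execute(batch)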
A BatchType is used with BatchStatement instances to control the atomicity of the batch operation.

Added in version 2.0.0.
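For instance, an unlogged batch skips the batch log and therefore gives up the atomicity guarantee in exchange for lower overhead (a sketch; the appropriate type depends on your workload):

from cassandra.query import BatchStatement, BatchType

batch = BatchStatement(batch_type=BatchType.UNLOGGED)  # LOGGED (default), UNLOGGED, or COUNTER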
A wrapper class that is used to specify that a sequence of values should be treated as a CQL list of values instead of a single column collection when used as part of the parameters argument for Session.execute().

This is typically needed when supplying a list of keys to select. For example:
>>> my_user_ids = ('alice', 'bob', 'charles')
>>> query = "SELECT * FROM users WHERE user_id IN %s"
>>> session.execute(query, parameters=[ValueSequence(my_user_ids)])
A trace of the duration and events that occurred when executing an operation.

Retrieves the actual tracing details from Cassandra and populates the attributes of this instance. Because tracing details are stored asynchronously by Cassandra, this may need to retry the session detail fetch. If the trace is still not available after max_wait seconds, TraceUnavailable will be raised; if max_wait is None, this will retry forever.

wait_for_complete=False bypasses the wait for duration to be populated. This can be used to query events from partial sessions.

query_cl specifies a consistency level to use for polling the trace tables, if it should be different than the session default.
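A hedged sketch of requesting a trace on a query and reading it back through the result set; the query and the specific attributes printed are assumptions:

from cassandra.query import SimpleStatement

statement = SimpleStatement("SELECT name, age FROM users LIMIT 1")
result = session.execute(statement, trace=True)
trace = result.get_query_trace(max_wait_sec=2)   # may raise TraceUnavailable
print(trace.coordinator, trace.duration)
for event in trace.events:
    print(event.source_elapsed, event.description)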
Representation of a single event within a query trace.
Raised when complete trace details cannot be fetched from Cassandra.