SQLAlchemy 2.1 Documentation
SQLAlchemy ORM
Table Configuration with Declarative¶
As introduced at Declarative Mapping, the Declarative style includes the ability to generate a mapped Table object at the same time, or to accommodate a Table or other FromClause object directly.
The following examples assume a declarative base class as:

from sqlalchemy.orm import DeclarativeBase

class Base(DeclarativeBase):
    pass

All of the examples that follow illustrate a class inheriting from the above Base. The decorator style introduced at Declarative Mapping using a Decorator (no declarative base) is fully supported with all the following examples as well, as are legacy forms of Declarative Base including base classes generated by declarative_base().
Declarative Table with mapped_column()¶
When using Declarative, the body of the class to be mapped in most cases includes an attribute __tablename__ that indicates the string name of a Table that should be generated along with the mapping. The mapped_column() construct, which features additional ORM-specific configuration capabilities not present in the plain Column class, is then used within the class body to indicate columns in the table. The example below illustrates the most basic use of this construct within a Declarative mapping:
from sqlalchemy import Integer, String
from sqlalchemy.orm import DeclarativeBase
from sqlalchemy.orm import mapped_column

class Base(DeclarativeBase):
    pass

class User(Base):
    __tablename__ = "user"

    id = mapped_column(Integer, primary_key=True)
    name = mapped_column(String(50), nullable=False)
    fullname = mapped_column(String)
    nickname = mapped_column(String(30))
Above, mapped_column() constructs are placed inline within the class definition as class level attributes. At the point at which the class is declared, the Declarative mapping process will generate a new Table object against the MetaData collection associated with the Declarative Base; each instance of mapped_column() will then be used to generate a Column object during this process, which will become part of the Table.columns collection of this Table object.
In the above example, Declarative will build a Table construct that is equivalent to the following:

# equivalent Table object produced
user_table = Table(
    "user",
    Base.metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String(50), nullable=False),
    Column("fullname", String()),
    Column("nickname", String(30)),
)
When the User class above is mapped, this Table object can be accessed directly via the __table__ attribute; this is described further at Accessing Table and Metadata.
The mapped_column() construct accepts all arguments that are accepted by the Column construct, as well as additional ORM-specific arguments. The mapped_column.__name field, indicating the name of the database column, is typically omitted, as the Declarative process will make use of the attribute name given to the construct and assign this as the name of the column (in the above example, this refers to the names id, name, fullname, nickname). Assigning an alternate mapped_column.__name is valid as well, where the resulting Column will use the given name in SQL and DDL statements, while the User mapped class will continue to allow access to the attribute using the attribute name given, independent of the name given to the column itself (more on this at Naming Declarative Mapped Columns Explicitly).
Tip
The mapped_column() construct is only valid within a Declarative class mapping. When constructing a Table object using Core, as well as when using imperative table configuration, the Column construct is still required in order to indicate the presence of a database column.
See also
Mapping Table Columns - contains additional notes on affecting how Mapper interprets incoming Column objects.
Using Annotated Declarative Table (Type Annotated Forms for mapped_column())¶
The mapped_column() construct is capable of deriving its column-configuration information from PEP 484 type annotations associated with the attribute as declared in the Declarative mapped class. These type annotations, if used, must be present within a special SQLAlchemy type called Mapped, which is a generic type that then indicates a specific Python type within it. Below illustrates the mapping from the previous section, adding the use of Mapped:
from typing import Optional

from sqlalchemy import String
from sqlalchemy.orm import DeclarativeBase
from sqlalchemy.orm import Mapped
from sqlalchemy.orm import mapped_column

class Base(DeclarativeBase):
    pass

class User(Base):
    __tablename__ = "user"

    id: Mapped[int] = mapped_column(primary_key=True)
    name: Mapped[str] = mapped_column(String(50))
    fullname: Mapped[Optional[str]]
    nickname: Mapped[Optional[str]] = mapped_column(String(30))
Above, when Declarative processes each class attribute, each mapped_column() will derive additional arguments from the corresponding Mapped type annotation on the left side, if present. Additionally, Declarative will generate an empty mapped_column() directive implicitly, whenever a Mapped type annotation is encountered that does not have a value assigned to the attribute (this form is inspired by the similar style used in Python dataclasses); this mapped_column() construct proceeds to derive its configuration from the Mapped annotation present.
mapped_column() derives the datatype and nullability from the Mapped annotation¶
The two qualities that mapped_column() derives from the Mapped annotation are:

- datatype - the Python type given inside Mapped, as contained within the typing.Optional construct if present, is associated with a TypeEngine subclass such as Integer, String, DateTime, or Uuid, to name a few common types. The datatype is determined based on a dictionary of Python type to SQLAlchemy datatype. This dictionary is completely customizable, as detailed in the next section Customizing the Type Map. The default type map is implemented as in the code example below:

from typing import Any
from typing import Dict
from typing import Type

import datetime
import decimal
import uuid

from sqlalchemy import types

# default type mapping, deriving the type for mapped_column()
# from a Mapped[] annotation
type_map: Dict[Type[Any], TypeEngine[Any]] = {
    bool: types.Boolean(),
    bytes: types.LargeBinary(),
    datetime.date: types.Date(),
    datetime.datetime: types.DateTime(),
    datetime.time: types.Time(),
    datetime.timedelta: types.Interval(),
    decimal.Decimal: types.Numeric(),
    float: types.Float(),
    int: types.Integer(),
    str: types.String(),
    uuid.UUID: types.Uuid(),
}
If the mapped_column() construct indicates an explicit type as passed to the mapped_column.__type argument, then the given Python type is disregarded.

- nullability - The mapped_column() construct will indicate its Column as NULL or NOT NULL first and foremost by the presence of the mapped_column.nullable parameter, passed either as True or False. Additionally, if the mapped_column.primary_key parameter is present and set to True, that will also imply that the column should be NOT NULL.
In the absence of both of these parameters, the presence of typing.Optional[] within the Mapped type annotation will be used to determine nullability, where typing.Optional[] means NULL, and the absence of typing.Optional[] means NOT NULL. If there is no Mapped[] annotation present at all, and there is no mapped_column.nullable or mapped_column.primary_key parameter, then SQLAlchemy's usual default for Column of NULL is used.
In the example below, the id and data columns will be NOT NULL, and the additional_info column will be NULL:

from typing import Optional

from sqlalchemy.orm import DeclarativeBase
from sqlalchemy.orm import Mapped
from sqlalchemy.orm import mapped_column

class Base(DeclarativeBase):
    pass

class SomeClass(Base):
    __tablename__ = "some_table"

    # primary_key=True, therefore will be NOT NULL
    id: Mapped[int] = mapped_column(primary_key=True)

    # not Optional[], therefore will be NOT NULL
    data: Mapped[str]

    # Optional[], therefore will be NULL
    additional_info: Mapped[Optional[str]]

It is also perfectly valid to have a mapped_column() whose nullability is different from what would be implied by the annotation. For example, an ORM mapped attribute may be annotated as allowing None within Python code that works with the object as it is first being created and populated, however the value will ultimately be written to a database column that is NOT NULL. The mapped_column.nullable parameter, when present, will always take precedence:

class SomeClass(Base):
    # ...

    # will be String() NOT NULL, but can be None in Python
    data: Mapped[Optional[str]] = mapped_column(nullable=False)

Similarly, for a non-None attribute that's written to a database column which, for whatever reason, needs to be NULL at the schema level, mapped_column.nullable may be set to True:

class SomeClass(Base):
    # ...

    # will be String() NULL, but type checker will not expect
    # the attribute to be None
    data: Mapped[str] = mapped_column(nullable=True)
Customizing the Type Map¶
The mapping of Python types to SQLAlchemy TypeEngine types described in the previous section defaults to a hardcoded dictionary present in the sqlalchemy.sql.sqltypes module. However, the registry object that coordinates the Declarative mapping process will first consult a local, user defined dictionary of types which may be passed as the registry.type_annotation_map parameter when constructing the registry, which may be associated with the DeclarativeBase superclass when first used.
As an example, if we wish to make use of the BIGINT datatype for int, the TIMESTAMP datatype with timezone=True for datetime.datetime, and then only on Microsoft SQL Server we'd like to use the NVARCHAR datatype when Python str is used, the registry and Declarative base could be configured as:
import datetime

from sqlalchemy import BIGINT, Integer, NVARCHAR, String, TIMESTAMP
from sqlalchemy.orm import DeclarativeBase
from sqlalchemy.orm import Mapped, mapped_column, registry

class Base(DeclarativeBase):
    type_annotation_map = {
        int: BIGINT,
        datetime.datetime: TIMESTAMP(timezone=True),
        str: String().with_variant(NVARCHAR, "mssql"),
    }

class SomeClass(Base):
    __tablename__ = "some_table"

    id: Mapped[int] = mapped_column(primary_key=True)
    date: Mapped[datetime.datetime]
    status: Mapped[str]
Below illustrates the CREATE TABLE statement generated for the above mapping, first on the Microsoft SQL Server backend, illustrating the NVARCHAR datatype:

>>> from sqlalchemy.schema import CreateTable
>>> from sqlalchemy.dialects import mssql, postgresql
>>> print(CreateTable(SomeClass.__table__).compile(dialect=mssql.dialect()))
CREATE TABLE some_table (
    id BIGINT NOT NULL IDENTITY,
    date TIMESTAMP NOT NULL,
    status NVARCHAR(max) NOT NULL,
    PRIMARY KEY (id)
)
Then on the PostgreSQL backend, illustrating TIMESTAMP WITH TIME ZONE:

>>> print(CreateTable(SomeClass.__table__).compile(dialect=postgresql.dialect()))
CREATE TABLE some_table (
    id BIGSERIAL NOT NULL,
    date TIMESTAMP WITH TIME ZONE NOT NULL,
    status VARCHAR NOT NULL,
    PRIMARY KEY (id)
)
By making use of methods such as TypeEngine.with_variant(), we're able to build up a type map that's customized to what we need for different backends, while still being able to use succinct annotation-only mapped_column() configurations. There are two more levels of Python-type configurability available beyond this, described in the next two sections.
Mapping Multiple Type Configurations to Python Types¶
As individual Python types may be associated with TypeEngine configurations of any variety by using the registry.type_annotation_map parameter, an additional capability is the ability to associate a single Python type with different variants of a SQL type based on additional type qualifiers. One typical example of this is mapping the Python str datatype to VARCHAR SQL types of different lengths. Another is mapping different varieties of decimal.Decimal to differently sized NUMERIC columns.
Python's typing system provides a way to add additional metadata to a Python type using the PEP 593 Annotated generic type, which allows additional information to be bundled along with a Python type. The mapped_column() construct will correctly interpret an Annotated object by identity when resolving it in the registry.type_annotation_map, as in the example below where we declare two variants of String and Numeric:
from decimal import Decimal

from typing_extensions import Annotated

from sqlalchemy import Numeric
from sqlalchemy import String
from sqlalchemy.orm import DeclarativeBase
from sqlalchemy.orm import Mapped
from sqlalchemy.orm import mapped_column
from sqlalchemy.orm import registry

str_30 = Annotated[str, 30]
str_50 = Annotated[str, 50]
num_12_4 = Annotated[Decimal, 12]
num_6_2 = Annotated[Decimal, 6]

class Base(DeclarativeBase):
    registry = registry(
        type_annotation_map={
            str_30: String(30),
            str_50: String(50),
            num_12_4: Numeric(12, 4),
            num_6_2: Numeric(6, 2),
        }
    )
The Python type passed to the Annotated container, in the above example the str and Decimal types, is important only for the benefit of typing tools; as far as the mapped_column() construct is concerned, it only needs to perform a lookup of each type object in the registry.type_annotation_map dictionary without actually looking inside of the Annotated object, at least in this particular context. Similarly, the arguments passed to Annotated beyond the underlying Python type itself are also not important; it's only that at least one argument must be present for the Annotated construct to be valid. We can then use these augmented types directly in our mapping where they will be matched to the more specific type constructions, as in the following example:
class SomeClass(Base):
    __tablename__ = "some_table"

    short_name: Mapped[str_30] = mapped_column(primary_key=True)
    long_name: Mapped[str_50]
    num_value: Mapped[num_12_4]
    short_num_value: Mapped[num_6_2]
A CREATE TABLE for the above mapping will illustrate the different variants of VARCHAR and NUMERIC we've configured, and looks like:

>>> from sqlalchemy.schema import CreateTable
>>> print(CreateTable(SomeClass.__table__))
CREATE TABLE some_table (
    short_name VARCHAR(30) NOT NULL,
    long_name VARCHAR(50) NOT NULL,
    num_value NUMERIC(12, 4) NOT NULL,
    short_num_value NUMERIC(6, 2) NOT NULL,
    PRIMARY KEY (short_name)
)
While variety in linking Annotated types to different SQL types grants us a wide degree of flexibility, the next section illustrates a second way in which Annotated may be used with Declarative that is even more open ended.
Mapping Whole Column Declarations to Python Types¶
The previous section illustrated using PEP 593 Annotated type instances as keys within the registry.type_annotation_map dictionary. In this form, the mapped_column() construct does not actually look inside the Annotated object itself; it's instead used only as a dictionary key. However, Declarative also has the ability to extract an entire pre-established mapped_column() construct from an Annotated object directly. Using this form, we can not only define different varieties of SQL datatypes linked to Python types without using the registry.type_annotation_map dictionary, we can also set up any number of arguments such as nullability, column defaults, and constraints in a reusable fashion.
A set of ORM models will usually have some kind of primary key style that is common to all mapped classes. There also may be common column configurations such as timestamps with defaults and other fields of pre-established sizes and configurations. We can compose these configurations into mapped_column() instances that we then bundle directly into instances of Annotated, which are then re-used in any number of class declarations. Declarative will unpack an Annotated object when provided in this manner, skipping over any other directives that don't apply to SQLAlchemy and searching only for SQLAlchemy ORM constructs.
The example below illustrates a variety of pre-configured field types used in this way, where we define intpk that represents an Integer primary key column, timestamp that represents a DateTime type which will use CURRENT_TIMESTAMP as a DDL level column default, and required_name which is a String of length 30 that's NOT NULL:
import datetime

from typing_extensions import Annotated

from sqlalchemy import func
from sqlalchemy import String
from sqlalchemy.orm import mapped_column

intpk = Annotated[int, mapped_column(primary_key=True)]
timestamp = Annotated[
    datetime.datetime,
    mapped_column(nullable=False, server_default=func.CURRENT_TIMESTAMP()),
]
required_name = Annotated[str, mapped_column(String(30), nullable=False)]
The above Annotated objects can then be used directly within Mapped, where the pre-configured mapped_column() constructs will be extracted and copied to a new instance that will be specific to each attribute:

class Base(DeclarativeBase):
    pass

class SomeClass(Base):
    __tablename__ = "some_table"

    id: Mapped[intpk]
    name: Mapped[required_name]
    created_at: Mapped[timestamp]
CREATE TABLE for our above mapping looks like:

>>> from sqlalchemy.schema import CreateTable
>>> print(CreateTable(SomeClass.__table__))
CREATE TABLE some_table (
    id INTEGER NOT NULL,
    name VARCHAR(30) NOT NULL,
    created_at DATETIME DEFAULT CURRENT_TIMESTAMP NOT NULL,
    PRIMARY KEY (id)
)
When using Annotated types in this way, the configuration of the type may also be affected on a per-attribute basis. For the types in the above example that feature explicit use of mapped_column.nullable, we can apply the Optional[] generic modifier to any of our types so that the field is optional or not at the Python level, which will be independent of the NULL / NOT NULL setting that takes place in the database:

import datetime
from typing import Optional

from typing_extensions import Annotated

from sqlalchemy.orm import DeclarativeBase
from sqlalchemy.orm import Mapped
from sqlalchemy.orm import mapped_column

timestamp = Annotated[
    datetime.datetime,
    mapped_column(nullable=False),
]

class Base(DeclarativeBase):
    pass

class SomeClass(Base):
    # ...

    # pep-484 type will be Optional, but column will be
    # NOT NULL
    created_at: Mapped[Optional[timestamp]]
The mapped_column() construct extracted from the Annotated object is also reconciled with an explicitly passed mapped_column() construct, whose arguments will take precedence over those of the Annotated construct. Below we add a ForeignKey constraint to our integer primary key and also use an alternate server default for the created_at column:
import datetime

from typing_extensions import Annotated

from sqlalchemy import ForeignKey
from sqlalchemy import func
from sqlalchemy.orm import DeclarativeBase
from sqlalchemy.orm import Mapped
from sqlalchemy.orm import mapped_column
from sqlalchemy.schema import CreateTable

intpk = Annotated[int, mapped_column(primary_key=True)]
timestamp = Annotated[
    datetime.datetime,
    mapped_column(nullable=False, server_default=func.CURRENT_TIMESTAMP()),
]

class Base(DeclarativeBase):
    pass

class Parent(Base):
    __tablename__ = "parent"

    id: Mapped[intpk]

class SomeClass(Base):
    __tablename__ = "some_table"

    # add ForeignKey to mapped_column(Integer, primary_key=True)
    id: Mapped[intpk] = mapped_column(ForeignKey("parent.id"))

    # change server default from CURRENT_TIMESTAMP to UTC_TIMESTAMP
    created_at: Mapped[timestamp] = mapped_column(server_default=func.UTC_TIMESTAMP())
The CREATE TABLE statement illustrates these per-attribute settings, adding a FOREIGN KEY constraint as well as substituting UTC_TIMESTAMP for CURRENT_TIMESTAMP:

>>> from sqlalchemy.schema import CreateTable
>>> print(CreateTable(SomeClass.__table__))
CREATE TABLE some_table (
    id INTEGER NOT NULL,
    created_at DATETIME DEFAULT UTC_TIMESTAMP() NOT NULL,
    PRIMARY KEY (id),
    FOREIGN KEY(id) REFERENCES parent (id)
)
Note
The feature of mapped_column() just described, where a fully constructed set of column arguments may be indicated using PEP 593 Annotated objects that contain a “template” mapped_column() object to be copied into the attribute, is currently not implemented for other ORM constructs such as relationship() and composite(). While this functionality is in theory possible, for the moment attempting to use Annotated to indicate further arguments for relationship() and similar will raise a NotImplementedError exception at runtime, but may be implemented in future releases.
Using Python Enum or pep-586 Literal types in the type map¶
New in version 2.0.0b4: Added Enum support
New in version 2.0.1: Added Literal support
User-defined Python types which derive from the Python built-in enum.Enum, as well as the typing.Literal class, are automatically linked to the SQLAlchemy Enum datatype when used in an ORM declarative mapping. The example below uses a custom enum.Enum within the Mapped[] constructor:
import enum

from sqlalchemy.orm import DeclarativeBase
from sqlalchemy.orm import Mapped
from sqlalchemy.orm import mapped_column

class Base(DeclarativeBase):
    pass

class Status(enum.Enum):
    PENDING = "pending"
    RECEIVED = "received"
    COMPLETED = "completed"

class SomeClass(Base):
    __tablename__ = "some_table"

    id: Mapped[int] = mapped_column(primary_key=True)
    status: Mapped[Status]
In the above example, the mapped attribute SomeClass.status will be linked to a Column with the datatype of Enum(Status). We can see this for example in the CREATE TABLE output for the PostgreSQL database:

CREATE TYPE status AS ENUM ('PENDING', 'RECEIVED', 'COMPLETED')

CREATE TABLE some_table (
    id SERIAL NOT NULL,
    status status NOT NULL,
    PRIMARY KEY (id)
)
In a similar way, typing.Literal may be used instead, using a typing.Literal that consists of all strings:

from typing import Literal

from sqlalchemy.orm import DeclarativeBase
from sqlalchemy.orm import Mapped
from sqlalchemy.orm import mapped_column

class Base(DeclarativeBase):
    pass

Status = Literal["pending", "received", "completed"]

class SomeClass(Base):
    __tablename__ = "some_table"

    id: Mapped[int] = mapped_column(primary_key=True)
    status: Mapped[Status]
The entries used in registry.type_annotation_map link the base enum.Enum Python type as well as the typing.Literal type to the SQLAlchemy Enum SQL type, using a special form which indicates to the Enum datatype that it should automatically configure itself against an arbitrary enumerated type. This configuration, which is implicit by default, would be indicated explicitly as:
import enum
import typing

import sqlalchemy
from sqlalchemy.orm import DeclarativeBase

class Base(DeclarativeBase):
    type_annotation_map = {
        enum.Enum: sqlalchemy.Enum(enum.Enum),
        typing.Literal: sqlalchemy.Enum(enum.Enum),
    }
The resolution logic within Declarative is able to resolve subclasses of enum.Enum as well as instances of typing.Literal to match the enum.Enum or typing.Literal entry in the registry.type_annotation_map dictionary. The Enum SQL type then knows how to produce a configured version of itself with the appropriate settings, including default string length. If a typing.Literal that does not consist of only string values is passed, an informative error is raised.
Native Enums and Naming¶
The Enum.native_enum parameter refers to whether the Enum datatype should create a so-called “native” enum, which on MySQL/MariaDB is the ENUM datatype and on PostgreSQL is a new TYPE object created by CREATE TYPE, or a “non-native” enum, which means that VARCHAR will be used to create the datatype. For backends other than MySQL/MariaDB or PostgreSQL, VARCHAR is used in all cases (third party dialects may have their own behaviors).
Because PostgreSQL’s CREATE TYPE requires that there’s an explicit name for the type to be created, special fallback logic exists when working with implicitly generated Enum without specifying an explicit Enum datatype within a mapping:
- If the Enum is linked to an enum.Enum object, the Enum.native_enum parameter defaults to True and the name of the enum will be taken from the name of the enum.Enum datatype. The PostgreSQL backend will assume CREATE TYPE with this name.
- If the Enum is linked to a typing.Literal object, the Enum.native_enum parameter defaults to False; no name is generated and VARCHAR is assumed.
To use typing.Literal with a PostgreSQL CREATE TYPE type, an explicit Enum must be used, either within the type map:

from typing import Literal

import sqlalchemy
from sqlalchemy.orm import DeclarativeBase

Status = Literal["pending", "received", "completed"]

class Base(DeclarativeBase):
    type_annotation_map = {
        Status: sqlalchemy.Enum("pending", "received", "completed", name="status_enum"),
    }
Or alternatively within mapped_column():

from typing import Literal

import sqlalchemy
from sqlalchemy.orm import DeclarativeBase
from sqlalchemy.orm import Mapped
from sqlalchemy.orm import mapped_column

Status = Literal["pending", "received", "completed"]

class Base(DeclarativeBase):
    pass

class SomeClass(Base):
    __tablename__ = "some_table"

    id: Mapped[int] = mapped_column(primary_key=True)
    status: Mapped[Status] = mapped_column(
        sqlalchemy.Enum("pending", "received", "completed", name="status_enum")
    )
Altering the Configuration of the Default Enum¶
In order to modify the fixed configuration of the Enum datatype that’s generated implicitly, specify new entries in the registry.type_annotation_map, indicating additional arguments. For example, to use “non native enumerations” unconditionally, the Enum.native_enum parameter may be set to False for all types:
import enum
import typing

import sqlalchemy
from sqlalchemy.orm import DeclarativeBase

class Base(DeclarativeBase):
    type_annotation_map = {
        enum.Enum: sqlalchemy.Enum(enum.Enum, native_enum=False),
        typing.Literal: sqlalchemy.Enum(enum.Enum, native_enum=False),
    }
Changed in version 2.0.1: Implemented support for overriding parameters such as Enum.native_enum within the Enum datatype when establishing the registry.type_annotation_map. Previously, this functionality was not working.
To use a specific configuration for a specific enum.Enum subtype, such as setting the string length to 50 when using the example Status datatype:

import enum

import sqlalchemy
from sqlalchemy.orm import DeclarativeBase

class Status(enum.Enum):
    PENDING = "pending"
    RECEIVED = "received"
    COMPLETED = "completed"

class Base(DeclarativeBase):
    type_annotation_map = {
        Status: sqlalchemy.Enum(Status, length=50, native_enum=False)
    }
By default, Enum datatypes that are automatically generated are not associated with the MetaData instance used by the Base, so if the metadata defines a schema it will not be automatically associated with the enum. To automatically associate the enum with the schema of the metadata or table it belongs to, the Enum.inherit_schema parameter can be set:

from enum import Enum

import sqlalchemy as sa
from sqlalchemy.orm import DeclarativeBase

class Base(DeclarativeBase):
    metadata = sa.MetaData(schema="my_schema")
    type_annotation_map = {Enum: sa.Enum(Enum, inherit_schema=True)}
Linking Specific enum.Enum or typing.Literal to other datatypes¶
The above examples feature the use of an Enum that is automatically configuring itself to the arguments / attributes present on an enum.Enum or typing.Literal type object. For use cases where specific kinds of enum.Enum or typing.Literal should be linked to other types, these specific types may be placed in the type map also. In the example below, an entry for Literal[] that contains non-string types is linked to the JSON datatype:
from typing import Literal

from sqlalchemy import JSON
from sqlalchemy.orm import DeclarativeBase

my_literal = Literal[0, 1, True, False, "true", "false"]

class Base(DeclarativeBase):
    type_annotation_map = {my_literal: JSON}
In the above configuration, the my_literal datatype will resolve to a JSON instance. Other Literal variants will continue to resolve to Enum datatypes.
Dataclass features in mapped_column()¶
The mapped_column() construct integrates with SQLAlchemy’s “native dataclasses” feature, discussed at Declarative Dataclass Mapping. See that section for current background on additional directives supported by mapped_column().
Accessing Table and Metadata¶
A declaratively mapped class will always include an attribute called __table__; when the above configuration using __tablename__ is complete, the declarative process makes the Table available via the __table__ attribute:

# access the Table
user_table = User.__table__

The above table is ultimately the same one that corresponds to the Mapper.local_table attribute, which we can see through the runtime inspection system:

from sqlalchemy import inspect

user_table = inspect(User).local_table

The MetaData collection associated with both the declarative registry as well as the base class is frequently necessary in order to run DDL operations such as CREATE, as well as in use with migration tools such as Alembic. This object is available via the .metadata attribute of registry as well as the declarative base class. Below, for a small script we may wish to emit a CREATE for all tables against a SQLite database:

from sqlalchemy import create_engine

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
Declarative Table Configuration¶
When using Declarative Table configuration with the __tablename__
declarative class attribute, additional arguments to be supplied to the
Table
constructor should be provided using the
__table_args__
declarative class attribute.
This attribute accommodates both positional as well as keyword
arguments that are normally sent to the
Table
constructor.
The attribute can be specified in one of two forms. One is as a
dictionary:
class MyClass(Base):
    __tablename__ = "sometable"
    __table_args__ = {"mysql_engine": "InnoDB"}
The other is a tuple, where each argument is positional (usually constraints):
class MyClass(Base):
    __tablename__ = "sometable"
    __table_args__ = (
        ForeignKeyConstraint(["id"], ["remote_table.id"]),
        UniqueConstraint("foo"),
    )
Keyword arguments can be specified with the above form by specifying the last argument as a dictionary:
class MyClass(Base):
    __tablename__ = "sometable"
    __table_args__ = (
        ForeignKeyConstraint(["id"], ["remote_table.id"]),
        UniqueConstraint("foo"),
        {"mysql_engine": "InnoDB"},
    )
A class may also specify the __table_args__
declarative attribute,
as well as the __tablename__
attribute, in a dynamic style using the
declared_attr()
method decorator. See
Composing Mapped Hierarchies with Mixins for background.
Explicit Schema Name with Declarative Table¶
The schema name for a Table
as documented at
Specifying the Schema Name is applied to an individual Table
using the Table.schema
argument. When using Declarative
tables, this option is passed like any other to the __table_args__
dictionary:
from sqlalchemy.orm import DeclarativeBase

class Base(DeclarativeBase):
    pass

class MyClass(Base):
    __tablename__ = "sometable"
    __table_args__ = {"schema": "some_schema"}
The schema name can also be applied to all Table
objects
globally by using the MetaData.schema
parameter documented
at Specifying a Default Schema Name with MetaData. The MetaData
object
may be constructed separately and associated with a DeclarativeBase
subclass by assigning to the metadata
attribute directly:
from sqlalchemy import MetaData
from sqlalchemy.orm import DeclarativeBase

metadata_obj = MetaData(schema="some_schema")

class Base(DeclarativeBase):
    metadata = metadata_obj

class MyClass(Base):
    # will use "some_schema" by default
    __tablename__ = "sometable"
See also
Specifying the Schema Name - in the Describing Databases with MetaData documentation.
Setting Load and Persistence Options for Declarative Mapped Columns¶
The mapped_column()
construct accepts additional ORM-specific
arguments that affect how the generated Column
is
mapped, affecting its load and persistence-time behavior. Options
that are commonly used include:
deferred column loading - The mapped_column.deferred boolean establishes the Column using deferred column loading by default. In the example below, the User.bio column will not be loaded by default, but only when accessed:

class User(Base):
    __tablename__ = "user"

    id: Mapped[int] = mapped_column(primary_key=True)
    name: Mapped[str]
    bio: Mapped[str] = mapped_column(Text, deferred=True)
See also
Limiting which Columns Load with Column Deferral - full description of deferred column loading
active history - The mapped_column.active_history parameter ensures that upon change of value for the attribute, the previous value will have been loaded and made part of the AttributeState.history collection when inspecting the history of the attribute. This may incur additional SQL statements:

class User(Base):
    __tablename__ = "user"

    id: Mapped[int] = mapped_column(primary_key=True)
    important_identifier: Mapped[str] = mapped_column(active_history=True)
See the docstring for mapped_column()
for a list of supported
parameters.
See also
Applying Load, Persistence and Mapping Options for Imperative Table Columns - describes using
column_property()
and deferred()
for use with
Imperative Table configuration
Naming Declarative Mapped Columns Explicitly¶
All of the examples thus far feature the mapped_column()
construct
linked to an ORM mapped attribute, where the Python attribute name given
to the mapped_column()
is also that of the column as we see in
CREATE TABLE statements as well as queries. The name for a column as expressed in SQL may be indicated by passing the string name as the first positional argument, mapped_column.__name.
In the example below, the User
class is mapped with alternate names
given to the columns themselves:
class User(Base):
    __tablename__ = "user"

    id: Mapped[int] = mapped_column("user_id", primary_key=True)
    name: Mapped[str] = mapped_column("user_name")
Where above User.id
resolves to a column named user_id
and User.name
resolves to a column named user_name
. We
may write a select()
statement using our Python attribute names
and will see the SQL names generated:
>>> from sqlalchemy import select
>>> print(select(User.id, User.name).where(User.name == "x"))
SELECT "user".user_id, "user".user_name
FROM "user"
WHERE "user".user_name = :user_name_1
See also
Alternate Attribute Names for Mapping Table Columns - applies to Imperative Table
Appending additional columns to an existing Declarative mapped class¶
A declarative table configuration allows the addition of new
Column
objects to an existing mapping after the Table
metadata has already been generated.
For a declarative class that is declared using a declarative base class,
the underlying metaclass DeclarativeMeta
includes a __setattr__()
method that will intercept additional mapped_column()
or Core
Column
objects and
add them to both the Table
using Table.append_column()
as well as to the existing Mapper
using Mapper.add_property()
:
MyClass.some_new_column = mapped_column(String)
Using core Column
:
MyClass.some_new_column = Column(String)
All arguments are supported including an alternate name, such as
MyClass.some_new_column = mapped_column("some_name", String)
. However,
the SQL type must be passed to the mapped_column()
or
Column
object explicitly, as in the above examples where
the String
type is passed. There’s no capability for
the Mapped
annotation type to take part in the operation.
Additional Column
objects may also be added to a mapping
in the specific circumstance of using single table inheritance, where
additional columns are present on mapped subclasses that have
no Table
of their own. This is illustrated in the section
Single Table Inheritance.
See also
Adding Relationships to Mapped Classes After Declaration - similar examples for relationship()
Note
Assignment of mapped
properties to an already mapped class will only
function correctly if the “declarative base” class is used, meaning
the user-defined subclass of DeclarativeBase
or the
dynamically generated class returned by declarative_base()
or registry.generate_base()
. This “base” class includes
a Python metaclass which implements a special __setattr__()
method
that intercepts these operations.
Runtime assignment of class-mapped attributes to a mapped class will not work
if the class is mapped using decorators like registry.mapped()
or imperative functions like registry.map_imperatively()
.
Declarative with Imperative Table (a.k.a. Hybrid Declarative)¶
Declarative mappings may also be provided with a pre-existing
Table
object, or otherwise a Table
or other
arbitrary FromClause
construct (such as a Join
or Subquery
) that is constructed separately.
This is referred to as a “hybrid declarative”
mapping, as the class is mapped using the declarative style for everything
involving the mapper configuration, however the mapped Table
object is produced separately and passed to the declarative process
directly:
from sqlalchemy import Column, ForeignKey, Integer, String, Table
from sqlalchemy.orm import DeclarativeBase

class Base(DeclarativeBase):
    pass

# construct a Table directly. The Base.metadata collection is
# usually a good choice for MetaData but any MetaData
# collection may be used.
user_table = Table(
    "user",
    Base.metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String),
    Column("fullname", String),
    Column("nickname", String),
)

# construct the User class using this table.
class User(Base):
    __table__ = user_table
Above, a Table
object is constructed using the approach
described at Describing Databases with MetaData. It can then be applied directly
to a class that is declaratively mapped. The __tablename__
and
__table_args__
declarative class attributes are not used in this form.
The above configuration is often more readable as an inline definition:
class User(Base):
    __table__ = Table(
        "user",
        Base.metadata,
        Column("id", Integer, primary_key=True),
        Column("name", String),
        Column("fullname", String),
        Column("nickname", String),
    )
A natural effect of the above style is that the __table__
attribute is
itself defined within the class definition block. As such it may be
immediately referenced within subsequent attributes, such as the example
below which illustrates referring to the type
column in a polymorphic
mapper configuration:
class Person(Base):
    __table__ = Table(
        "person",
        Base.metadata,
        Column("id", Integer, primary_key=True),
        Column("name", String(50)),
        Column("type", String(50)),
    )

    __mapper_args__ = {
        "polymorphic_on": __table__.c.type,
        "polymorphic_identity": "person",
    }
The “imperative table” form is also used when a non-Table
construct, such as a Join
or Subquery
object,
is to be mapped. An example below:
from sqlalchemy import func, select

subq = (
    select(
        func.count(orders.c.id).label("order_count"),
        func.max(orders.c.price).label("highest_order"),
        orders.c.customer_id,
    )
    .group_by(orders.c.customer_id)
    .subquery()
)

customer_select = (
    select(customers, subq)
    .join_from(customers, subq, customers.c.id == subq.c.customer_id)
    .subquery()
)

class Customer(Base):
    __table__ = customer_select
For background on mapping to non-Table
constructs see
the sections Mapping a Class against Multiple Tables and Mapping a Class against Arbitrary Subqueries.
The “imperative table” form is of particular use when the class itself is using an alternative form of attribute declaration, such as Python dataclasses. See the section Applying ORM Mappings to an existing dataclass (legacy dataclass use) for detail.
See also
Describing Databases with MetaData
Applying ORM Mappings to an existing dataclass (legacy dataclass use)
Alternate Attribute Names for Mapping Table Columns¶
The section Naming Declarative Mapped Columns Explicitly illustrated how to
use mapped_column()
to provide a specific name for the generated
Column
object separate from the attribute name under which
it is mapped.
When using Imperative Table configuration, we already have
Column
objects present. To map these to alternate names
we may assign the Column
to the desired attributes
directly:
user_table = Table(
    "user",
    Base.metadata,
    Column("user_id", Integer, primary_key=True),
    Column("user_name", String),
)

class User(Base):
    __table__ = user_table

    id = user_table.c.user_id
    name = user_table.c.user_name
The User
mapping above will refer to the "user_id"
and "user_name"
columns via the User.id
and User.name
attributes, in the same
way as demonstrated at Naming Declarative Mapped Columns Explicitly.
One caveat to the above mapping is that the direct inline link to
Column
will not be typed correctly when using
PEP 484 typing tools. A strategy to resolve this is to apply the
Column
objects within the column_property()
function; while the Mapper
already generates this property
object for its internal use automatically, by naming it in the class
declaration, typing tools will be able to match the attribute to the
Mapped
annotation:
from sqlalchemy.orm import column_property
from sqlalchemy.orm import Mapped

class User(Base):
    __table__ = user_table

    id: Mapped[int] = column_property(user_table.c.user_id)
    name: Mapped[str] = column_property(user_table.c.user_name)
See also
Naming Declarative Mapped Columns Explicitly - applies to Declarative Table
Applying Load, Persistence and Mapping Options for Imperative Table Columns¶
The section Setting Load and Persistence Options for Declarative Mapped Columns reviewed how to set load
and persistence options when using the mapped_column()
construct
with Declarative Table configuration. When using Imperative Table configuration,
we already have existing Column
objects that are mapped.
In order to map these Column
objects along with additional
parameters that are specific to the ORM mapping, we may use the
column_property()
and deferred()
constructs in order to
associate additional parameters with the column. Options include:
deferred column loading - The deferred() function is shorthand for invoking column_property() with the column_property.deferred parameter set to True; this construct establishes the Column using deferred column loading by default. In the example below, the User.bio column will not be loaded by default, but only when accessed:

from sqlalchemy.orm import deferred

user_table = Table(
    "user",
    Base.metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String),
    Column("bio", Text),
)

class User(Base):
    __table__ = user_table

    bio = deferred(user_table.c.bio)
See also
Limiting which Columns Load with Column Deferral - full description of deferred column loading
active history - The column_property.active_history parameter ensures that upon change of value for the attribute, the previous value will have been loaded and made part of the AttributeState.history collection when inspecting the history of the attribute. This may incur additional SQL statements:

from sqlalchemy.orm import column_property

user_table = Table(
    "user",
    Base.metadata,
    Column("id", Integer, primary_key=True),
    Column("important_identifier", String),
)

class User(Base):
    __table__ = user_table

    important_identifier = column_property(
        user_table.c.important_identifier, active_history=True
    )
See also
The column_property() construct is also important for cases where classes are mapped to alternative FROM clauses such as joins and selects. More background on these cases is at:
Mapping a Class against Multiple Tables
Mapping a Class against Arbitrary Subqueries
For Declarative Table configuration with mapped_column(), most options are available directly; see the section Setting Load and Persistence Options for Declarative Mapped Columns for examples.
Mapping Declaratively with Reflected Tables¶
There are several patterns available which provide for producing mapped
classes against a series of Table
objects that were
introspected from the database, using the reflection process described at
Reflecting Database Objects.
A simple way to map a class to a table reflected from the database is to
use a declarative hybrid mapping, passing the
Table.autoload_with
parameter to the constructor for
Table
:
from sqlalchemy import create_engine
from sqlalchemy import Table
from sqlalchemy.orm import DeclarativeBase
engine = create_engine("postgresql+psycopg2://user:pass@hostname/my_existing_database")
class Base(DeclarativeBase):
    pass

class MyClass(Base):
    __table__ = Table(
        "mytable",
        Base.metadata,
        autoload_with=engine,
    )
A variant on the above pattern that scales for many tables is to use the
MetaData.reflect()
method to reflect a full set of Table
objects at once, then refer to them from the MetaData
:
from sqlalchemy import create_engine
from sqlalchemy import Table
from sqlalchemy.orm import DeclarativeBase
engine = create_engine("postgresql+psycopg2://user:pass@hostname/my_existing_database")
class Base(DeclarativeBase):
    pass

Base.metadata.reflect(engine)

class MyClass(Base):
    __table__ = Base.metadata.tables["mytable"]
One caveat to the approach of using __table__
is that the mapped classes cannot
be declared until the tables have been reflected, which requires the database
connectivity source to be present while the application classes are being
declared; it’s typical that classes are declared as the modules of an
application are being imported, but database connectivity isn’t available
until the application starts running code so that it can consume configuration
information and create an engine. There are currently two approaches
to working around this, described in the next two sections.
Using DeferredReflection¶
To accommodate the use case of declaring mapped classes where reflection of
table metadata can occur afterwards, a simple extension called the
DeferredReflection
mixin is available, which alters the declarative
mapping process to be delayed until a special class-level
DeferredReflection.prepare()
method is called, which will perform
the reflection process against a target database, and will integrate the
results with the declarative table mapping process, that is, classes which
use the __tablename__
attribute:
from sqlalchemy import ForeignKey, Integer
from sqlalchemy.ext.declarative import DeferredReflection
from sqlalchemy.orm import DeclarativeBase, mapped_column, relationship

class Base(DeclarativeBase):
    pass

class Reflected(DeferredReflection):
    __abstract__ = True

class Foo(Reflected, Base):
    __tablename__ = "foo"

    bars = relationship("Bar")

class Bar(Reflected, Base):
    __tablename__ = "bar"

    foo_id = mapped_column(Integer, ForeignKey("foo.id"))
Above, we create a mixin class Reflected
that will serve as a base
for classes in our declarative hierarchy that should become mapped when
the Reflected.prepare
method is called. The above mapping is not
complete until we do so, given an Engine
:
engine = create_engine("postgresql+psycopg2://user:pass@hostname/my_existing_database")
Reflected.prepare(engine)
The purpose of the Reflected
class is to define the scope at which
classes should be reflectively mapped. The plugin will search among the
subclass tree of the target against which .prepare()
is called and reflect
all tables which are named by declared classes; tables in the target database
that are not part of mappings and are not related to the target tables
via foreign key constraint will not be reflected.
Using Automap¶
A more automated solution to mapping against an existing database where table reflection is to be used is to use the Automap extension. This extension will generate entire mapped classes from a database schema, including relationships between classes based on observed foreign key constraints. While it includes hooks for customization, such as hooks that allow custom class naming and relationship naming schemes, automap is oriented towards an expedient zero-configuration style of working. If an application wishes to have a fully explicit model that makes use of table reflection, the DeferredReflection class may be preferable for its less automated approach.
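A minimal automap run, again using an in-memory SQLite database as a stand-in for an existing schema, might look like:

```python
from sqlalchemy import create_engine, text
from sqlalchemy.ext.automap import automap_base

# an in-memory database stands in for an existing database
engine = create_engine("sqlite://")
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE user (id INTEGER PRIMARY KEY, name VARCHAR)"))

AutomapBase = automap_base()

# reflect the schema and generate mapped classes
AutomapBase.prepare(autoload_with=engine)

# generated classes are available keyed on table name
User = AutomapBase.classes.user
```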
See also
Automap - full documentation for the Automap extension
Automating Column Naming Schemes from Reflected Tables¶
When using any of the previous reflection techniques, we have the option
to change the naming scheme by which columns are mapped. The
Column
object includes a parameter
Column.key
which is a string name that determines
under what name
this Column
will be present in the Table.c
collection, independently of the SQL name of the column. This key is also
used by Mapper
as the attribute name under which the
Column
will be mapped, if not supplied through other
means such as that illustrated at Alternate Attribute Names for Mapping Table Columns.
When working with table reflection, we can intercept the parameters that
will be used for Column
as they are received using
the DDLEvents.column_reflect()
event and apply whatever
changes we need, including the .key
attribute but also things like
datatypes.
The event hook is most easily
associated with the MetaData
object that’s in use
as illustrated below:
from sqlalchemy import event
from sqlalchemy.orm import DeclarativeBase

class Base(DeclarativeBase):
    pass

@event.listens_for(Base.metadata, "column_reflect")
def column_reflect(inspector, table, column_info):
    # set column.key = "attr_<lower_case_name>"
    column_info["key"] = "attr_%s" % column_info["name"].lower()
With the above event, the reflection of Column
objects will be intercepted
with our event that adds a new “.key” element, such as in a mapping as below:
class MyClass(Base):
    __table__ = Table("some_table", Base.metadata, autoload_with=some_engine)
The approach also works with both the DeferredReflection
base class
as well as with the Automap extension. For automap
specifically, see the section Intercepting Column Definitions for
background.
See also
Mapping Declaratively with Reflected Tables
Intercepting Column Definitions - in the Automap documentation
Mapping to an Explicit Set of Primary Key Columns¶
In order to successfully map a table, the Mapper construct always requires that at least one column be identified as the "primary key" for that selectable. This is so that when an ORM object is loaded or persisted, it can be placed in the identity map with an appropriate identity key.
In those cases where a reflected table to be mapped does not include
a primary key constraint, as well as in the general case for
mapping against arbitrary selectables
where primary key columns might not be present, the
Mapper.primary_key
parameter is provided so that any set of
columns may be configured as the “primary key” for the table, as far as
ORM mapping is concerned.
Given the following example of an Imperative Table
mapping against an existing Table
object where the table does not
have any declared primary key (as may occur in reflection scenarios), we may
map such a table as in the following example:
from sqlalchemy import Column
from sqlalchemy import MetaData
from sqlalchemy import String
from sqlalchemy import Table
from sqlalchemy import UniqueConstraint
from sqlalchemy.orm import DeclarativeBase
metadata = MetaData()

group_users = Table(
    "group_users",
    metadata,
    Column("user_id", String(40), nullable=False),
    Column("group_id", String(40), nullable=False),
    UniqueConstraint("user_id", "group_id"),
)

class Base(DeclarativeBase):
    pass

class GroupUsers(Base):
    __table__ = group_users
    __mapper_args__ = {"primary_key": [group_users.c.user_id, group_users.c.group_id]}
Above, the group_users
table is an association table of some kind
with string columns user_id
and group_id
, but no primary key is set up;
instead, there is only a UniqueConstraint
establishing that the
two columns represent a unique key. The Mapper
does not automatically
inspect unique constraints for primary keys; instead, we make use of the
Mapper.primary_key
parameter, passing a collection of
[group_users.c.user_id, group_users.c.group_id]
, indicating that these two
columns should be used in order to construct the identity key for instances
of the GroupUsers
class.
Mapping a Subset of Table Columns¶
Sometimes table reflection may provide a Table
with many
columns that are not important for our needs and may be safely ignored.
For such a table that has lots of columns that don’t need to be referenced
in the application, the Mapper.include_properties
or Mapper.exclude_properties
parameters can indicate
a subset of columns to be mapped, where other columns from the
target Table
will not be considered by the ORM in any
way. Example:
class User(Base):
    __table__ = user_table
    __mapper_args__ = {"include_properties": ["user_id", "user_name"]}
In the above example, the User
class will map to the user_table
table, only
including the user_id
and user_name
columns - the rest are not referenced.
Similarly:
class Address(Base):
    __table__ = address_table
    __mapper_args__ = {"exclude_properties": ["street", "city", "state", "zip"]}
will map the Address
class to the address_table
table, including
all columns present except street
, city
, state
, and zip
.
As indicated in the two examples, columns may be referenced either
by string name or by referring to the Column
object
directly. Referring to the object directly may be useful for explicitness as
well as to resolve ambiguities when
mapping to multi-table constructs that might have repeated names:
class User(Base):
    __table__ = user_table
    __mapper_args__ = {
        "include_properties": [user_table.c.user_id, user_table.c.user_name]
    }
When columns are not included in a mapping, these columns will not be
referenced in any SELECT statements emitted when executing select()
or legacy Query
objects, nor will there be any mapped attribute
on the mapped class which represents the column; assigning an attribute of that
name will have no effect beyond that of a normal Python attribute assignment.
However, it is important to note that schema level column defaults WILL
still be in effect for those Column
objects that include them,
even though they may be excluded from the ORM mapping.
“Schema level column defaults” refers to the defaults described at
Column INSERT/UPDATE Defaults including those configured by the
Column.default
, Column.onupdate
,
Column.server_default
and
Column.server_onupdate
parameters. These constructs
continue to have normal effects because in the case of
Column.default
and Column.onupdate
, the
Column
object is still present on the underlying
Table
, thus allowing the default functions to take place when
the ORM emits an INSERT or UPDATE, and in the case of
Column.server_default
and
Column.server_onupdate
, the relational database itself
emits these defaults as a server side behavior.
flambé! the dragon and The Alchemist image designs created and generously donated by Rotem Yaari.
Created using Sphinx 7.2.6. Documentation last generated: Fri 08 Nov 2024 08:32:22 AM EST