Hands-On Docker for Microservices with Python

Working with SQLAlchemy

SQLAlchemy (https://www.sqlalchemy.org/) is a powerful Python module to work with SQL databases. There are two main approaches to working with databases from a high-level language such as Python. One is keeping the low-level approach and issuing raw SQL statements, retrieving the data as it is in the database. The other is to abstract the database using an Object-Relational Mapper (ORM) and use the interface without getting into the details of how it is implemented.

The first approach is well represented by the Python database API specification (PEP 249—https://www.python.org/dev/peps/pep-0249/), which is followed by all major database drivers, such as psycopg2 (http://initd.org/psycopg/) for PostgreSQL. This approach mainly consists of creating SQL command strings, executing them, and parsing the results. It allows us to tailor each query, but it's not very productive for common operations that get repeated over and over. PonyORM (https://ponyorm.org/) is another example that's not so low level, but it still aims at replicating the SQL syntax and structure.
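To make this concrete, here is a minimal sketch of the DB-API (PEP 249) style. It uses the standard library's sqlite3 driver instead of psycopg2, so it runs without a PostgreSQL server; the table and column names are made up for the example:

```python
# DB-API (PEP 249) style: we write SQL strings ourselves, execute them,
# and parse the results. The "thought" table here is a hypothetical example.
import sqlite3

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute(
    'CREATE TABLE thought (id INTEGER PRIMARY KEY, username TEXT, text TEXT)'
)
# Parameters are passed separately from the SQL to avoid SQL injection
cur.execute(
    'INSERT INTO thought (username, text) VALUES (?, ?)', ('alice', 'hello')
)
conn.commit()
cur.execute('SELECT username, text FROM thought WHERE username = ?', ('alice',))
rows = cur.fetchall()  # each row comes back as a plain tuple to unpack ourselves
```

Every query is spelled out by hand, which is exactly the flexibility and the repetitiveness described above.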

For the second approach, the best-known example is probably the Django ORM (https://docs.djangoproject.com/en/2.2/topics/db/). It abstracts database access through defined Python model objects. It works fantastically well for common operations, but its model assumes that the database is defined in our Python code, and mapping legacy databases can be very painful. Some complex SQL operations generated by the ORM can take a lot of time, while a custom-tailored query could save much of it. It's also easy to run slow queries without even realizing it, because the tool abstracts us so much from the final result.

SQLAlchemy (https://www.sqlalchemy.org/) is quite flexible and can work on both ends of the spectrum. It's not as straightforward or as easy to use as the Django ORM, but it allows us to map existing databases into an ORM. This is why we will use it in our example: it can take an existing, complicated legacy database and map it, allowing you to perform simple operations easily and complicated operations in exactly the way you want.

Keep in mind that the operations we'll use in this book are quite simple, and SQLAlchemy won't particularly shine at them. But it's an invaluable tool if you're planning a complex migration from an old monolith that accesses the database through manually written SQL statements to a newly created microservice. If you are already dealing with a complicated database, spending some time learning how to use SQLAlchemy will pay off. A well-tailored SQLAlchemy definition can perform some complex tasks very efficiently, but it requires good knowledge of the tool.
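As a sketch of what mapping an existing database looks like, SQLAlchemy's automap extension can reflect tables that already exist and generate mapped classes from them. The table and columns below are assumptions created just for the example (in a real migration, the tables would already be in the legacy database):

```python
# Sketch: mapping a pre-existing table with SQLAlchemy's automap extension.
# The "thought_model" table is created here only so the example is runnable.
from sqlalchemy import create_engine, text
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session

engine = create_engine('sqlite://')
with engine.begin() as conn:
    conn.execute(text(
        'CREATE TABLE thought_model ('
        'id INTEGER PRIMARY KEY, username TEXT, text TEXT)'
    ))

Base = automap_base()
Base.prepare(autoload_with=engine)   # reflect existing tables into classes
ThoughtModel = Base.classes.thought_model

session = Session(engine)
session.add(ThoughtModel(username='alice', text='hello'))
session.commit()
```

Note that automap needs a primary key on each table to generate a class for it; tables without one have to be mapped explicitly.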

The documentation for Flask-SQLAlchemy (https://flask-sqlalchemy.palletsprojects.com/en/2.x/) is a good place to start, as it summarizes the main operations, and the main SQLAlchemy documentation can be overwhelming at first.
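For reference, a model along the lines of the ThoughtModel used in the snippets that follow might look like this. In the book's code it is defined through Flask-SQLAlchemy's db.Model; this sketch uses plain SQLAlchemy declarative mapping so it runs on its own, and the exact column names and sizes are assumptions:

```python
# A hypothetical stand-alone version of ThoughtModel, using plain
# SQLAlchemy declarative mapping instead of Flask-SQLAlchemy's db.Model.
from datetime import datetime
from sqlalchemy import Column, DateTime, Integer, String, create_engine
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class ThoughtModel(Base):
    __tablename__ = 'thought_model'
    id = Column(Integer, primary_key=True)
    username = Column(String(50))
    text = Column(String(250))
    timestamp = Column(DateTime, default=datetime.utcnow)

engine = create_engine('sqlite://')   # in-memory database for the example
Base.metadata.create_all(engine)      # create the table from the model
```

With Flask-SQLAlchemy, the same definition would subclass db.Model and use db.Column, and the extension would manage the engine and session for us.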

After we define a model, we can perform a query by using the query attribute on the model and filtering accordingly:

# Retrieve a single thought by its primary key
thought = ThoughtModel.query.get(thought_id)
# Retrieve all thoughts filtered by a username
thoughts = (ThoughtModel.query.filter_by(username=username)
            .order_by('id').all())

Storing and deleting a row requires the use of the session and then committing it:

# Create a new thought (datetime comes from the standard library)
from datetime import datetime

new_thought = ThoughtModel(username=username, text=text, timestamp=datetime.utcnow())
db.session.add(new_thought)
db.session.commit()

# Retrieve and delete a thought
thought = ThoughtModel.query.get(thought_id)
db.session.delete(thought)
db.session.commit()
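Updating a row follows the same session pattern: the session tracks changes on loaded objects, so modifying an attribute and committing is enough, with no explicit UPDATE statement. This self-contained sketch uses an in-memory SQLite database and an assumed model shape:

```python
# Sketch: updating a row through the session. Column names are assumptions.
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class ThoughtModel(Base):
    __tablename__ = 'thought_model'
    id = Column(Integer, primary_key=True)
    username = Column(String(50))
    text = Column(String(250))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(engine)
session.add(ThoughtModel(username='alice', text='first draft'))
session.commit()

# Load, modify, commit: the session issues the UPDATE for us
thought = session.query(ThoughtModel).filter_by(username='alice').one()
thought.text = 'edited'
session.commit()
```

In the Flask-SQLAlchemy code from the book, the session would be db.session rather than one created by hand.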

To see how to configure the database access, check the thoughts_backend/db.py file.