
Welcome to the Ultimate FastAPI tutorial series. This post is part 7. The series is a project-based tutorial where we will build a cooking recipe API. Each post gradually adds more complex functionality, showcasing the capabilities of FastAPI, ending with a realistic, production-ready API. The series is designed to be followed in order, but if you already know FastAPI you can jump to the relevant part.
Project github repo directory for this part of the tutorial
This is the first of the intermediate-level posts. We’ll cover quite a lot of ground in this post because there are a lot of parts that all work together and would therefore be more confusing if presented in isolation (because you can’t easily spin it up and run it locally without all the parts).
SQLAlchemy is one of the most widely used and highest quality Python third-party libraries. It gives application developers easy ways to work with relational databases in their Python code.
SQLAlchemy considers the database to be a relational algebra engine, not just a collection of tables. Rows can be selected from not only tables but also joins and other select statements; any of these units can be composed into a larger structure. SQLAlchemy’s expression language builds on this concept from its core.
SQLAlchemy is composed of two distinct components:

- The Core - a fully featured SQL toolkit and expression language
- The ORM (Object Relational Mapper) - which builds on the Core, letting you map Python classes to database tables

In this tutorial, we will make use of both components, though you can adapt the approach not to use the ORM.
So far in the tutorial, we have not been able to persist data beyond a server restart since all our POST operations just updated data structures in memory. Now we will change that by bringing in a relational database. We’ll use SQLite because it requires minimal setup, so it’s useful for learning. With very minor config modifications you can use the same approach for other relational database management systems (RDBMS) such as PostgreSQL or MySQL.
In the tutorial repo, open up the part-7 directory. You’ll notice that there are a number of new directories compared to previous parts of the tutorial:
```
.
├── alembic                   ----> NEW
│   ├── env.py
│   ├── README
│   ├── script.py.mako
│   └── versions
│       └── 238090727082_added_user_and_recipe_tables.py
├── alembic.ini               ----> NEW
├── app
│   ├── __init__.py
│   ├── backend_pre_start.py  ----> NEW
│   ├── crud                  ----> NEW
│   │   ├── __init__.py
│   │   ├── base.py
│   │   ├── crud_recipe.py
│   │   └── crud_user.py
│   ├── db                    ----> NEW
│   │   ├── __init__.py
│   │   ├── base.py
│   │   ├── base_class.py
│   │   ├── init_db.py
│   │   └── session.py
│   ├── deps.py               ----> NEW
│   ├── initial_data.py
│   ├── main.py
│   ├── models                ----> NEW
│   │   ├── __init__.py
│   │   ├── recipe.py
│   │   └── user.py
│   ├── recipe_data.py
│   ├── schemas
│   │   ├── __init__.py
│   │   ├── recipe.py
│   │   └── user.py
│   └── templates
│       └── index.html
├── poetry.lock
├── prestart.sh
├── pyproject.toml
├── README.md
└── run.sh
```

We’ll go through all of these additions in this post, and by the end you’ll understand how all the new modules work together to enable not just a one-time database integration, but also migrations as we update our database schemas. More on that soon.
The overall diagram of what we’re working towards looks like this:

To start off, we will look at the ORM and Data Access Layers:
For now, let’s turn our attention to the new db directory.
We want to define tables and columns from our Python classes using the ORM. In SQLAlchemy, this is enabled through a declarative mapping. The most common pattern is constructing a base class using the SQLAlchemy declarative_base function, and then having all DB model classes inherit from this base class.
We create this base class in the db/base_class.py module:
```python
import typing as t

from sqlalchemy.ext.declarative import as_declarative, declared_attr

class_registry: t.Dict = {}


@as_declarative(class_registry=class_registry)
class Base:
    id: t.Any
    __name__: str

    # Generate __tablename__ automatically
    @declared_attr
    def __tablename__(cls) -> str:
        return cls.__name__.lower()
```

In other codebases/examples you may have seen this done like so:
```python
Base = declarative_base()
```

In our case, we’re doing the same thing but with a decorator (provided by SQLAlchemy) so that we can declare some helper methods on our Base class - like automatically generating a __tablename__.
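To see what the __tablename__ helper buys us, here is a minimal, self-contained sketch (the ShoppingList model is hypothetical, purely for illustration - it is not part of the project):

```python
from sqlalchemy import Column, Integer
from sqlalchemy.ext.declarative import as_declarative, declared_attr


@as_declarative()
class Base:
    # Generate __tablename__ automatically from the class name
    @declared_attr
    def __tablename__(cls) -> str:
        return cls.__name__.lower()


class ShoppingList(Base):  # hypothetical model, for illustration only
    id = Column(Integer, primary_key=True)


print(ShoppingList.__tablename__)  # -> shoppinglist
```

There is no need to spell out __tablename__ by hand on every model - the declared_attr helper derives it from each class name.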
Having done that, we are now free to define the tables we need for our API. So far we’ve worked with some toy recipe data stored in memory:
```python
RECIPES = [
    {
        "id": 1,
        "label": "Chicken Vesuvio",
        "source": "Serious Eats",
        "url": "http://www.seriouseats.com/recipes/2011/12/chicken-vesuvio-recipe.html",
    },
    {
        "id": 2,
        "label": "Chicken Paprikash",
        "source": "No Recipes",
        "url": "http://norecipes.com/recipe/chicken-paprikash/",
    },
    {
        "id": 3,
        "label": "Cauliflower and Tofu Curry Recipe",
        "source": "Serious Eats",
        "url": "http://www.seriouseats.com/recipes/2011/02/cauliflower-and-tofu-curry-recipe.html",
    },
]
```

Therefore the first table we want to define is a recipe table that will store the data above. We define this table via the ORM in models/recipe.py:
```python
from sqlalchemy import Column, Integer, String, ForeignKey
from sqlalchemy.orm import relationship

from app.db.base_class import Base


class Recipe(Base):  # 1
    id = Column(Integer, primary_key=True, index=True)  # 2
    label = Column(String(256), nullable=False)
    url = Column(String(256), index=True, nullable=True)
    source = Column(String(256), nullable=True)
    submitter_id = Column(String(10), ForeignKey("user.id"), nullable=True)  # 3
    submitter = relationship("User", back_populates="recipes")  # 4
```

Let’s break this down:
1. We represent the database recipe table with a Python class, which inherits from the Base class we defined earlier (this allows SQLAlchemy to detect and map the class to a database table).
2. Every column of the recipe table (e.g. id, label) is defined in the class, setting the column type with SQLAlchemy types like Integer and String.
3. We define a foreign key to the user table via the ForeignKey class.
4. To link the two ORM classes, we use relationship() and connect the two sides using the relationship.back_populates parameter.

As you can infer from the foreign key, we will also need to define a user table, since we want to be able to attribute the recipes to users. A user table will set us up for doing auth in later parts of the tutorial.
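To make the ForeignKey / relationship / back_populates trio concrete before we move on, here is a self-contained sketch using a hypothetical pair of tables (Author and Book stand in for User and Recipe; none of these names exist in the project):

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, relationship, sessionmaker

Base = declarative_base()


class Author(Base):  # hypothetical, for illustration only
    __tablename__ = "author"
    id = Column(Integer, primary_key=True)
    name = Column(String(64))
    books = relationship("Book", back_populates="author")


class Book(Base):
    __tablename__ = "book"
    id = Column(Integer, primary_key=True)
    title = Column(String(64))
    author_id = Column(Integer, ForeignKey("author.id"))
    author = relationship("Author", back_populates="books")


# Throwaway in-memory DB; the real project creates tables via alembic instead
engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)
db = sessionmaker(bind=engine)()

db.add(Book(title="SQL for Cooks", author=Author(name="Alice")))
db.commit()

# back_populates keeps both sides of the relationship in sync
author = db.query(Author).first()
print([b.title for b in author.books])  # -> ['SQL for Cooks']
```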
Our user table is defined in models/user.py, and follows a similar structure:
```python
from sqlalchemy import Boolean, Column, Integer, String
from sqlalchemy.orm import relationship

from app.db.base_class import Base


class User(Base):
    id = Column(Integer, primary_key=True, index=True)
    first_name = Column(String(256), nullable=True)
    surname = Column(String(256), nullable=True)
    email = Column(String, index=True, nullable=False)
    is_superuser = Column(Boolean, default=False)
    recipes = relationship(
        "Recipe",
        cascade="all,delete-orphan",
        back_populates="submitter",
        uselist=True,
    )
```

Great, we have defined our tables. Now what? Well, we haven’t yet told SQLAlchemy how to connect to the DB (e.g. what is the database called, how do we connect to it, what flavor of SQL is it). All this happens in the SQLAlchemy Engine class.
We instantiate an engine in the db/session.py module:
```python
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker, Session

SQLALCHEMY_DATABASE_URI = "sqlite:///example.db"  # 1

engine = create_engine(  # 2
    SQLALCHEMY_DATABASE_URI,
    # required for sqlite
    connect_args={"check_same_thread": False},  # 3
)
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)  # 4
```

Let’s break this code down:
1. The SQLALCHEMY_DATABASE_URI defines the file where SQLite will persist data.
2. Via the create_engine function we instantiate our engine, passing in the DB connection URI - note that this connection string can be much more complex and include drivers, dialects, database server locations, users, passwords and ports. Here’s a postgres example.
3. The check_same_thread: False config is necessary to work with SQLite - this is a common gotcha because FastAPI can access the database with multiple threads during a single request, so SQLite needs to be configured to allow that.
4. We create a SessionLocal factory for the SQLAlchemy Session, which (unlike the engine) is ORM-specific. When working with the ORM, the session object is our main access point to the database.

From the SQLAlchemy Session docs:
In the most general sense, the Session establishes all conversations with the database andrepresents a “holding zone” for all the objects which you’ve loaded or associated with it during its lifespan.
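Here is a compact, throwaway sketch of the engine → session factory → session flow described above (an in-memory SQLite DB and a hypothetical Note table, for illustration only):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()


class Note(Base):  # hypothetical table, for illustration only
    __tablename__ = "note"
    id = Column(Integer, primary_key=True)
    text = Column(String(64))


engine = create_engine("sqlite:///:memory:")  # engine: how to talk to the DB
Base.metadata.create_all(engine)  # fine for a demo; part-7 uses alembic instead

SessionLocal = sessionmaker(autoflush=False, bind=engine)  # session factory

db = SessionLocal()  # the session is our ORM access point
db.add(Note(text="hello"))
db.commit()
print(db.query(Note).count())  # -> 1
db.close()
```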
We are making progress! Next, we will once again turn to Pydantic (which we looked at in part 4) to make it very easy to get our Python code into the right shape for database operations.
Now let’s look at the FastAPI app endpoint logic, specifically the Pydantic DB schemas and CRUD utilities:

For those already used to SQLAlchemy and other Python web frameworks like Django or Flask, this part will probably contain something a little different to what you might be used to. Let’s zoom in on the diagram:

The key sequencing to understand is that as REST API requests which will require interaction with the database come in, the following occurs:

1. A database session is created and injected into the endpoint function (e.g. the root function in the main.py file) via FastAPI’s dependency injection
2. The endpoint calls the relevant CRUD utility function, passing in the session
3. The CRUD utilities query the database via the SQLAlchemy ORM models
4. The results are shaped and validated by the Pydantic schemas before being returned to the client

Don’t worry if this isn’t entirely clear yet, we’ll go through each step and by the end of this blog post have brought everything together.
We will create Pydantic models for reading/writing data from our various API endpoints. The terminology can get confusing because we have SQLAlchemy models which look like this:
name = Column(String)
and Pydantic models which look like this:
name: str
To help distinguish the two, we tend to keep the Pydantic classes in the schemas directory.
Let’s look at the schemas/recipe.py module:
```python
from pydantic import BaseModel, HttpUrl

from typing import Sequence


class RecipeBase(BaseModel):
    label: str
    source: str
    url: HttpUrl


class RecipeCreate(RecipeBase):
    label: str
    source: str
    url: HttpUrl
    submitter_id: int


class RecipeUpdate(RecipeBase):
    label: str


# Properties shared by models stored in DB
class RecipeInDBBase(RecipeBase):
    id: int
    submitter_id: int

    class Config:
        orm_mode = True


# Properties to return to client
class Recipe(RecipeInDBBase):
    pass


# Properties stored in DB
class RecipeInDB(RecipeInDBBase):
    pass
```

Some of these classes, like Recipe and RecipeCreate, existed in previous parts of the tutorial (in the old schema.py module); others, such as those classes referencing the DB, are new.
Pydantic’s orm_mode (which you can see in RecipeInDBBase) will tell the Pydantic model to read the data even if it is not a dict, but an ORM model (or any other arbitrary object with attributes). Without orm_mode, if you returned a SQLAlchemy model from your path operation, it wouldn’t include the relationship data.
Why make the distinction between a Recipe and RecipeInDB? This allows us in future to separate fields which are only relevant for the DB, or which we don’t want to return to the client (such as a password field).
As we saw in the diagram, it’s not enough to just have the Pydantic schemas. We also need some reusable functions to interact with the database. This will be our data access layer, and by FastAPI convention, these utility classes are defined in the crud directory.
These CRUD utility classes help us to do things like:

- Read a row from the database given its ID
- Read multiple rows (with a limit and offset)
- Create a new row
- Update or delete an existing row
Each table gets its own CRUD class, which inherits reusable parts from a base class. Let’s examine this now in crud/base.py.
```python
from typing import Any, Dict, Generic, List, Optional, Type, TypeVar, Union

from fastapi.encoders import jsonable_encoder
from pydantic import BaseModel
from sqlalchemy.orm import Session

from app.db.base_class import Base

ModelType = TypeVar("ModelType", bound=Base)
CreateSchemaType = TypeVar("CreateSchemaType", bound=BaseModel)
UpdateSchemaType = TypeVar("UpdateSchemaType", bound=BaseModel)


class CRUDBase(Generic[ModelType, CreateSchemaType, UpdateSchemaType]):  # 1
    def __init__(self, model: Type[ModelType]):  # 2
        """
        CRUD object with default methods to Create, Read, Update, Delete (CRUD).

        **Parameters**

        * `model`: A SQLAlchemy model class
        * `schema`: A Pydantic model (schema) class
        """
        self.model = model

    def get(self, db: Session, id: Any) -> Optional[ModelType]:
        return db.query(self.model).filter(self.model.id == id).first()  # 3

    def get_multi(
        self, db: Session, *, skip: int = 0, limit: int = 100
    ) -> List[ModelType]:
        return db.query(self.model).offset(skip).limit(limit).all()  # 4

    def create(self, db: Session, *, obj_in: CreateSchemaType) -> ModelType:
        obj_in_data = jsonable_encoder(obj_in)
        db_obj = self.model(**obj_in_data)  # type: ignore
        db.add(db_obj)
        db.commit()  # 5
        db.refresh(db_obj)
        return db_obj

    # skipping rest...
```

This is one of the trickier bits of code in this part of the tutorial, so let’s break it down:
1. The CRUDBase class is generic: subclasses of CRUDBase will be defined with a SQLAlchemy model as the first argument, then the Pydantic model (aka schema) for creating and updating rows as the second and third arguments.
2. The constructor takes (and stores) the SQLAlchemy model class the CRUD object will operate on.
3. We use the SQLAlchemy session (db) .query method to chain together different DB queries. These can be as simple or complex as we need. Here is the SQLAlchemy documentation on queries. In this example we filter by ID, allowing us to fetch a single row from the database.
4. To fetch multiple entries we chain the .offset and .limit methods, finishing with .all()
5. When creating DB rows, we use the session commit method (see docs) to complete the row insertion. We’ll be looking at tradeoffs to having the commit call here vs. in the endpoint later in the tutorial series (in the Python 3 asyncio and performance blog post).

Now that we’ve defined the CRUDBase we can use that to define crud utilities for each table. The code for these subclasses is much simpler, with the majority of the logic inherited from the base class:
```python
from app.crud.base import CRUDBase
from app.models.recipe import Recipe
from app.schemas.recipe import RecipeCreate, RecipeUpdate


class CRUDRecipe(CRUDBase[Recipe, RecipeCreate, RecipeUpdate]):  # 1
    ...


recipe = CRUDRecipe(Recipe)  # 2
```

1. The CRUDRecipe class is parameterized with the SQLAlchemy Recipe model, followed by the Pydantic RecipeCreate and RecipeUpdate schemas.
2. We instantiate the CRUDRecipe class.

Don’t worry if this seems a bit abstract at the moment. In the last part of this post we show this being used in the endpoints, so you can see (and run locally) the API endpoints interacting with the DB, and how the Pydantic schemas and CRUD utilities work together. However, before we get there, we need to handle the initial creation of the DB, as well as future migrations.
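If the generics in CRUDBase still feel abstract, this stdlib-only toy analogue (no database, just a dict) shows the same shape - a reusable base parameterized by a type, with thin per-"table" subclasses. All the names here are invented for illustration:

```python
from typing import Dict, Generic, List, Optional, TypeVar

ModelType = TypeVar("ModelType")


class InMemoryCRUDBase(Generic[ModelType]):
    """Dict-backed stand-in for CRUDBase (illustration only)."""

    def __init__(self) -> None:
        self._rows: Dict[int, ModelType] = {}
        self._next_id = 1

    def get(self, id: int) -> Optional[ModelType]:
        return self._rows.get(id)

    def get_multi(self, *, skip: int = 0, limit: int = 100) -> List[ModelType]:
        return list(self._rows.values())[skip : skip + limit]

    def create(self, *, obj_in: ModelType) -> int:
        obj_id = self._next_id
        self._rows[obj_id] = obj_in
        self._next_id += 1
        return obj_id


class LabelCRUD(InMemoryCRUDBase[str]):  # per-"table" subclass, logic inherited
    pass


labels = LabelCRUD()
first_id = labels.create(obj_in="Chicken Vesuvio")
print(labels.get(first_id))  # -> Chicken Vesuvio
print(labels.get_multi(limit=10))  # -> ['Chicken Vesuvio']
```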

The goal of this tutorial is to build a production-ready API, and you simply can’t set up a database without considering how to make changes to your tables over time. A common solution for this challenge is the alembic library, which is designed to work with SQLAlchemy.
Recall our new part-7 file tree:
```
.
├── alembic          ----> NEW
│   ├── env.py
│   ├── README
│   ├── script.py.mako
│   └── versions
│       └── 238090727082_added_user_and_recipe_tables.py
├── alembic.ini      ----> NEW
├── app
...etc.
```

The alembic directory contains a few files we’ll be using:
- env.py - where we pass in configuration (such as our database connection string, config required to create a SQLAlchemy engine, and our SQLAlchemy Base declarative mapping class)
- versions - a directory containing each migration to be run. Every file within this directory represents a DB migration, and contains a reference to previous/next migrations so they are always run in the correct sequence.
- script.py.mako - boilerplate generated by alembic
- README - boilerplate generated by alembic
- alembic.ini - tells alembic where to look for the other files, as well as setting up config for logging

In order to trigger the alembic migrations, you run the alembic upgrade head command. When you make any change to a database table, you capture that change by running alembic revision --autogenerate -m "Some description" - this will generate a new file in the versions directory which you should always check.
For our recipe API, we’ve wrapped this migration command in the prestart.sh bash script:
```bash
#! /usr/bin/env bash

# Let the DB start
python ./app/backend_pre_start.py

# Run migrations
alembic upgrade head    # <---- ALEMBIC MIGRATION COMMAND

# Create initial data in DB
python ./app/initial_data.py
```

Running the alembic migrations will not only apply changes to the database, but also create the tables and columns in the first place. This is why you don’t find any table creation command like Base.metadata.create_all(bind=engine) which you’ll often find in tutorials that don’t cover migrations.
You’ll notice that we also have a couple of other scripts in our prestart.sh script:
- backend_pre_start.py - which simply executes a SQL SELECT 1 query to check our DB is working
- initial_data.py - which uses the init_db function from db/init_db.py, which we will break down further now

Here is db/init_db.py:
```python
import logging

from sqlalchemy.orm import Session

from app import crud, schemas
from app.db import base  # noqa: F401
from app.recipe_data import RECIPES

logger = logging.getLogger(__name__)

FIRST_SUPERUSER = "[email protected]"


def init_db(db: Session) -> None:  # 1
    if FIRST_SUPERUSER:
        user = crud.user.get_by_email(db, email=FIRST_SUPERUSER)  # 2
        if not user:
            user_in = schemas.UserCreate(
                full_name="Initial Super User",
                email=FIRST_SUPERUSER,
                is_superuser=True,
            )
            user = crud.user.create(db, obj_in=user_in)
        else:
            logger.warning(
                "Skipping creating superuser. User with email "
                f"{FIRST_SUPERUSER} already exists."
            )
        if not user.recipes:
            for recipe in RECIPES:
                recipe_in = schemas.RecipeCreate(
                    label=recipe["label"],
                    source=recipe["source"],
                    url=recipe["url"],
                    submitter_id=user.id,
                )
                crud.recipe.create(db, obj_in=recipe_in)  # 3
```

1. The init_db function takes as its only argument a SQLAlchemy Session object, which we can import from our db/session.py.
2. We make use of the crud utility functions that we created earlier in this part of the tutorial to fetch or create a user. We need a user so that we can assign a submitter to the initial recipes (recall the foreign key lookup from the recipe table to the user table).
3. We loop over the app/recipe_data.py RECIPES list of dictionaries, use that data to create a series of Pydantic RecipeCreate schemas, which we can then pass to the crud.recipe.create function to INSERT rows into the database.

Give this a try in your cloned copy:
- pip install poetry
- poetry install to install the dependencies
- poetry run ./prestart.sh

In the terminal, you should see migrations being applied. You should also see a new file created: part-7-database/example.db. This is the SQLite DB; check it by running:
```
sqlite3 example.db
.tables
```

You should see 3 tables: alembic_version, recipe, and user. Check the initial recipe data has been created with a simple SQL query: SELECT * FROM recipe;
You should see 3 recipe rows in the sqlite DB:

You can exit the SQLite3 shell with the command .exit
Great! All that remains now is to bring everything together in our API endpoints.

If you look at app/main.py you’ll find all the endpoint functions have been updated to take an additional db argument:
```python
from fastapi import Request, Depends

# skipping...


@api_router.get("/", status_code=200)
def root(
    request: Request,
    db: Session = Depends(deps.get_db),
) -> dict:
    """
    Root GET
    """
    recipes = crud.recipe.get_multi(db=db, limit=10)
    return TEMPLATES.TemplateResponse(
        "index.html",
        {"request": request, "recipes": recipes},
    )

# skipping...
```

This is a first look at FastAPI’s powerful dependency injection capabilities, which, for my money, is one of the framework’s best features. Dependency Injection (DI) is a way for your functions and classes to declare the things they need to work (in a FastAPI context, usually our endpoint functions, which are called path operation functions).
We’ll be exploring dependency injection in much more detail later in the tutorial. For now, what you need to appreciate is that the FastAPI Depends class is used in our function parameters like so:
db: Session = Depends(deps.get_db)
And what we pass into Depends is a function specifying the dependency. In this part of the tutorial, we’ve added these functions in the deps.py module:
```python
from typing import Generator

from app.db.session import SessionLocal  # 1


def get_db() -> Generator:
    db = SessionLocal()  # 2
    try:
        yield db  # 3
    finally:
        db.close()  # 4
```

Quick breakdown:
1. We import the SessionLocal session factory from app/db/session.py.
2. We instantiate a new session.
3. We yield the session, which makes get_db a generator. Why do this? Well, the yield statement suspends the function’s execution and sends a value back to the caller, but retains enough state to enable the function to resume where it left off. In short, it’s an efficient way to work with our database connection. Here’s a Python generators primer for those unfamiliar.
4. We close the session in the finally clause of the try block - meaning that the DB session is always closed. This releases connection objects associated with the session and leaves the session ready to be used again.

OK, now we understand how our DB session is being made available in our various endpoints. Let’s look at a more complex example:
```python
@api_router.get("/recipe/{recipe_id}", status_code=200, response_model=Recipe)  # 1
def fetch_recipe(
    *,
    recipe_id: int,
    db: Session = Depends(deps.get_db),
) -> Any:
    """
    Fetch a single recipe by ID
    """
    result = crud.recipe.get(db=db, id=recipe_id)  # 2
    if not result:
        # the exception is raised, not returned - you will get a validation
        # error otherwise.
        raise HTTPException(
            status_code=404, detail=f"Recipe with ID {recipe_id} not found"
        )

    return result
```

Notice the following changes:
1. The response_model=Recipe now refers to our updated Pydantic Recipe model, meaning that it will work with our ORM calls.
2. We call our crud utility function to get a recipe by ID, passing in the db session object we specified as a dependency to the endpoint.

The extra crud utilities took a bit more time to understand - but can you see how we now have an elegant separation of concerns: there is no need for any DB queries in the endpoint code, it’s all handled within our CRUD utility functions.
We see a similar pattern in the other endpoints, swapping out crud.recipe.get for crud.recipe.get_multi when we’re returning multiple recipes, and crud.recipe.create when we create new recipes in the POST endpoint.
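Tying this back to the get_db dependency: FastAPI drives such generator dependencies for us, but the yield/finally lifecycle can be demonstrated with plain Python (FakeSession is a stand-in for SessionLocal(), purely for illustration):

```python
from typing import Generator


class FakeSession:
    """Stand-in for a SQLAlchemy session (illustration only)."""

    closed = False

    def close(self) -> None:
        self.closed = True


def get_db() -> Generator:
    db = FakeSession()
    try:
        yield db  # handed to the endpoint while it runs
    finally:
        db.close()  # always runs, even if the endpoint raised


# Roughly what FastAPI does around each request:
gen = get_db()
db = next(gen)  # resolve the dependency; the endpoint now uses `db`
print(db.closed)  # -> False (open while the request is handled)
gen.close()  # after the response, the generator is finalized
print(db.closed)  # -> True (the finally clause closed the session)
```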
Let’s give it a try!
From the top (you can skip the prestart.sh command if you’ve already run it):

- pip install poetry
- cd into the part-7 directory, then poetry install
- poetry run ./prestart.sh (only required once)
- poetry run ./run.sh
- Open http://localhost:8001/docs in your browser

Go ahead and create a new recipe using the interactive docs’ “try me” button; you’ll need to set the submitter_id to 1 to match the initial user we created via prestart.sh:

Gotcha: the url field must be a valid URL with an http/https scheme.
Scroll down the responses and you should see the HTTP 201 status code and the JSON response body:

And now for the moment of truth. Stop the server (CTRL-C), then restart it with poetry run ./run.sh
Navigate to the home page: http://localhost:8001
You should see the recipe you just created persisted in the list of recipes:

The database is now hooked up to our API!
Phew! That was a long one. In the next part of the tutorial, we’ll take stock and update our app structure to be really production-ready, as well as adding proper config and API versioning. We’re well on our way now.