
Data Normalization in PostgreSQL

01.2022

Normalization by Osmosis
Part 1: A Guide to Data Normalization in PostgreSQL 

Introduction to data normalization in PostgreSQL

Although I installed my first PostgreSQL database on Red Hat Linux (not RHEL) 6.0 around 1999, I probably limited my interaction with that 6.5 release to starting the server with SysV init scripts. PostgreSQL was at that time an underdog to MySQL (or so I thought), so it was cool. At the time I knew nothing about SQL (except the name), databases or data modelling.

Not long after that, I found myself a student at a big technical university — and to my surprise, I was hired by the university IT centre to help with the deployment of a brand-new student records management system being developed internally. Suddenly, without any magical improvement in my knowledge and skills, I became an Oracle DBA. You’ll learn on the job, they said. Indeed, on the job I learned that Big Red is a thing (or rather — a beast).

I served my days as best I could, sitting between two huge ERD diagrams (A0 size) that I had printed out and meticulously glued together from smaller sheets. I was still not much smarter about data modelling, but since the system was developed by the university’s most experienced academic staff, it was supposed to be designed correctly. And correctly designed it was.

Without any formal training and proper tooling…

(no money for anything more than 70 or so named user licenses — Windows SQL*Plus was all I had) I was trying to find my way through the maze of tables, views and foreign keys. I found myself writing ad-hoc reports and queries, correcting operators’ mistakes, and so on. Over the years, as if by some magic influence, the ancient knowledge of how to properly decompose and normalize data diffused from those two huge diagrams into my mind, my thinking and my reflexes, as if by osmosis.

Since then, whenever I see a database schema, I can immediately say whether or not it feels right. Much later on, I learned the reasons behind those feelings.

And yes, the database was designed in (mostly) Codd’s 3rd normal form.

So… what am I talking about?

The concept of…

normal forms

…is older than SQL, or practical relational database software, for that matter. It was introduced by Edgar F. Codd at IBM in his seminal work in 1970. That work laid the foundation for all the relational databases we use today, including PostgreSQL.

Unfortunately, despite the fact that these concepts have very practical applications, they are somewhat academic in nature and origin; in order to talk about them, I need to introduce some theory and define some terms.

Note: The following few paragraphs are not specific to PostgreSQL, so you can apply them to any relational database. Also, if you already know all this like the back of your hand, feel free to skip ahead as much as is comfortable for you.

The important thing to remember is: The objective of introducing the concept of normal forms was to avoid… 

data anomalies

…by means of the elimination of all possible data duplication. That’s what it is really all about:

  • avoid ambiguity (update anomaly)
  • avoid loss of information (delete anomaly)
  • avoid incompleteness (insert anomaly)

How do anomalies look in practice? Let’s imagine we are back in the early 1970s, and our thriving paperback romance bookstore uses an application that keeps all data in one set of records (e.g., in an ISAM file):

| Author | Origin | Title | Pages | Price |
| --- | --- | --- | --- | --- |
| Rommy la Dequell | Altdorf, HU | Power of silence | 123 | $0.99 |
| Angie Nadajoskova | Bratislava, SK | Irritated soul | 234 | $23.99 |
| Rommy la Dequell | Altdorf, AT | Desire to suffer | 456 | $1.99 |

And we need to add a new author we signed up, but we do not yet have details about the new thriller-romance he is writing:

| Author | Origin | Title | Pages | Price |
| --- | --- | --- | --- | --- |
| Michael Birg | Bludenz, AT | | | $4.99 |

So we can see the following anomalies:

First, our best-selling author of two books (records 1 & 3) has an inconsistency: Is Altdorf in Hungary or in Austria? Or has the author moved? Second, when the book “Irritated Soul” ran out of print, removing it from the records also lost us all the information about the author — information we might very well want to keep (record 2). And finally, the new record cannot be added at all — it simply fails to deliver the essential information. We cannot use it as it is.

These problems might seem naive, or even ridiculous, today, now that we are all accustomed to the fantastic capabilities of PostgreSQL and other RDBMSes. But remember, this was 50 years ago, when all joins were programmed by hand — and more importantly, remember that those anomalies can still appear today, although in a less obvious form.
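The classic cure for all three anomalies is to store each fact exactly once. As a hedged sketch in modern PostgreSQL terms (table and column names are illustrative, not taken from any real application), the single record set splits into an author table and a book table:

```sql
-- Authors are stored once, independently of their books: the origin can no
-- longer be contradictory (update anomaly), and deleting a book no longer
-- deletes the author (delete anomaly).
CREATE TABLE author (
    name   text PRIMARY KEY,
    origin text
);

CREATE TABLE book (
    author text NOT NULL REFERENCES author (name),
    title  text NOT NULL,
    pages  integer,
    price  numeric(8,2),
    PRIMARY KEY (author, title)
);

-- The newly signed author can be added before his book has any details
-- (insert anomaly solved):
INSERT INTO author (name, origin) VALUES ('Michael Birg', 'Bludenz, AT');
```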

As I do not expect anyone to have the same knowledge diffusion experience I have described above, there are a few… 

basic concepts 

…and terms I need to clarify first:

We have a record (also called a tuple or row) consisting of named and typed fields (also called attributes or columns). A collection of identically typed rows is a table (also known as a relation). Fields in a record or records in a table are not guaranteed to have any particular order. So far so good; all users of any relational database should already be familiar with these terms.

Now about…

keys

A trivial way to identify any row in a table is to create a key that is a combination of all fields: this is the trivial superkey (columns which are part of the key are marked with an asterisk below).

| Author* | Origin* | Title* | Pages* | Price* |
| --- | --- | --- | --- | --- |
| Rommy la Dequell | Altdorf, HU | Power of silence | 123 | $0.99 |
It is usually possible to create superkeys that can still identify rows, but do not contain all fields, for example:

| Author* | Origin* | Title* | Pages* | Price |
| --- | --- | --- | --- | --- |
| Rommy la Dequell | Altdorf, HU | Power of silence | 123 | $0.99 |

or

| Author* | Origin* | Title* | Pages | Price* |
| --- | --- | --- | --- | --- |
| Rommy la Dequell | Altdorf, HU | Power of silence | 123 | $0.99 |

If a column belongs to a candidate key,

...it is a prime attribute (or key attribute). If it does not, it is a non-prime attribute (non-key attribute). This distinction is important in describing what can depend on what. E.g., in the examples above, the number of pages or the price may be the same for other books; there is no functional dependency.

If it is no longer possible to remove a field from a superkey without losing its ability to identify a row, we have arrived at a candidate key — there may be more than one such combination of fields, for example:

| Author* | Origin* | Title* | Pages | Price |
| --- | --- | --- | --- | --- |
| Rommy la Dequell | Altdorf, HU | Power of silence | 123 | $0.99 |

or

| Author* | Origin | Title* | Pages* | Price |
| --- | --- | --- | --- | --- |
| Rommy la Dequell | Altdorf, HU | Power of silence | 123 | $0.99 |

To reliably access any particular row and enforce uniqueness, one of the candidate keys must be selected as the primary key — it consists of a designated combination of one or more columns (when there are multiple columns, it is a compound key), and their combined contents must be unique and non-empty.

| Author* | Origin* | Title* | Pages | Price |
| --- | --- | --- | --- | --- |
| Rommy la Dequell | Altdorf, HU | Power of silence | 123 | $0.99 |

Keys of a record can be natural — when they are derived from natural, inherent attributes of a real-life thing described by the record — or surrogate. A natural key comes straight out of one or more existing columns; an example would be a social security number. A surrogate key has its own, separate column and is artificially created due to the need to uniquely identify each record. Surrogate keys have their benefits: they can be easily generated, and cheaply stored and sorted (bigint). If necessary, they can have guaranteed global uniqueness (UUID). This type of key is often automatically used when a schema is autogenerated by some persistence framework or ORM.

However, surrogate keys do have one big disadvantage:

they do not say anything about the actual properties of whatever real-world object the row describes. As a consequence, they cannot tell us anything about it; in particular, they cannot capture its uniqueness. The only uniqueness a surrogate key can guarantee is its own. And that means that in practice, duplicates can occur; the very thing normalization is trying to avoid.

On the other hand, since in the real world objects are identified by a combination of features rather than by a single one, natural primary keys are usually compound — and that makes them rather inconvenient to use in practice. Also, repeating them in other tables as foreign keys leads to data repetition, in the form of multiple rows and tables containing the same (possibly long) values over and over again, which is just an inefficient use of space.

Even though we haven't started normalizing yet, the use of a surrogate key in the role of primary key is common practice, so here is an example:

Let’s create a table describing people, and let’s include a sufficient variety of attributes to capture their uniqueness: name, surname, place and date of birth, height, eye-colour, net worth, favourite dish, pet name and so on. Let’s decide on the combination of these attributes that will, for our use case, uniquely identify each person. Now, that exact combination will be our compound primary key — so the database can ensure that only one record with exactly these values of the selected attributes can exist in the table.

However, if we just decide to give such a person a number and make it the primary key, nothing will prevent us from having multiple identical records — for all intents and purposes describing exactly the same person — differing just by the meaningless value of that one artificially assigned number (rows 1 & 4), or worse, containing small differences caused by typos or mistakes (row 3). Note, however, that a typo made in row 2 would not have been prevented anyway, since it happened in a column which was originally part of the primary key.

| Id | Author | Origin | Title | Pages | Price |
| --- | --- | --- | --- | --- | --- |
| 1 | Rommy la Dequell | Altdorf, HU | Power of silence | 123 | $0.99 |
| 2 | Rommy la Dequell | Altdorf, AT | Power of silence | 123 | $0.99 |
| 3 | Rommy la Dequel | Altdorf, HU | Power of silence | 123 | $0.99 |
| 4 | Rommy la Dequell | Altdorf, HU | Power of silence | 123 | $0.99 |

Note: To cope with the issue of assuring uniqueness and still help the database do its job efficiently, the primary key should always be natural — compound, if necessary, to assure uniqueness. To conveniently refer to table rows, however, create one additional surrogate key and make it unique. That way we separate the concern of assuring row uniqueness (served in just one place, by the primary key) from the concern of providing a convenient way to refer to a row from other tables (served by a column with just a unique constraint). That will suffice. See the CYBERTEC blog on unique constraints for more info.
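A minimal sketch of this pattern, with illustrative names (and assuming, for the sketch, that author and title together identify a book): the natural compound key remains the primary key, while a generated surrogate column carries only a unique constraint and serves as a compact target for foreign keys.

```sql
CREATE TABLE book (
    author  text NOT NULL,
    title   text NOT NULL,
    pages   integer,
    price   numeric(8,2),
    -- surrogate key: cheap to generate, store, sort and reference
    book_id bigint GENERATED ALWAYS AS IDENTITY UNIQUE,
    -- natural compound key: the one that actually guarantees row uniqueness
    PRIMARY KEY (author, title)
);

-- Other tables refer to the short surrogate instead of repeating the long
-- natural key values:
CREATE TABLE sale (
    sale_id bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    book_id bigint NOT NULL REFERENCES book (book_id),
    sold_on date NOT NULL
);
```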

The last concept I need to introduce is the functional dependency between attributes. An attribute is functionally dependent on another attribute if its value always appears with the particular value of the attribute it depends on. This dependency is unidirectional. You can imagine it as a function, or as a mapping table, unequivocally transforming one value into another.

A trivial example would be to have two columns expressing the same physical property (e.g. weight), one metric, the other imperial. The value of one is functionally dependent on the other, since the conversion formula is well known.

A less trivial example would be to assign a less precise property (e.g. a category) based on some more specific property (e.g. price or number of pages). Somewhere (usually in the application) there is a precise formula to derive the first from the second. If it is applied consistently and without exceptions — we have a functional dependency. That dependency can be encoded in an additional table in relational form and then calculated upon each query, or in a view. Note, however, that if there is a business intention to have exceptions (that is, to assign the category in an arbitrary way — even if only incidentally), there is no strict functional dependency, and that needs to be reflected in the schema design.
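For example (a hedged sketch with made-up names and category boundaries), such a dependency can be derived in a view instead of being stored, so it can never become inconsistent:

```sql
CREATE TABLE book (
    author text NOT NULL,
    title  text NOT NULL,
    pages  integer,
    PRIMARY KEY (author, title)
);

-- The category is functionally dependent on the page count, so it is
-- computed on each query rather than stored redundantly.
CREATE VIEW book_with_category AS
SELECT author, title, pages,
       CASE
           WHEN pages < 150 THEN 'novella'
           WHEN pages < 400 THEN 'novel'
           ELSE 'epic'
       END AS category
FROM book;
```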

Back to normal forms — some are mostly…

relevant

…to daily practice, but with the constant evolution of database software capabilities (especially in PostgreSQL), the division “relevant / not relevant” should be taken with a grain of salt.

Each normal form has a precise requirement that must be satisfied, and builds upon the requirement expressed in the previous form, so let’s talk about them in order:

  • 0NF — aka NNF (Not Normalized Form) requires no additional assurances except that a record has a primary key that uniquely identifies it.
  • 1NF — requires that no field has a table as its value. The original definition was even stricter: that a field value must be atomic (that is — not compound). With current RDBMSes, and especially PostgreSQL, that presents a rather challenging requirement; let’s stick with the «no table as value» form and discuss what atomic, compound and opaque mean later.
  • non-compliant
| Person | Born in | Born on | Mother’s name | Born in Land | Mother born in | Mother born on | Lived in |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Anna Gruber | Klagenfurt | 12.10.1967 | Angela Bauer | Kärnten | Salzburg | 30.02.1948 | Wien, Linz |
| Betty Huber | Bregenz | 14.05.1989 | Regina Wagner | Vorarlberg | Graz | 22.11.1970 | Linz, Salzburg |
  • compliant — composite field values (lived in) are moved to a separate table
| Person | Born in | Born on | Born in Land | Mother’s name | Mother born in | Mother born on |
| --- | --- | --- | --- | --- | --- | --- |
| Anna Gruber | Klagenfurt | 12.10.1967 | Kärnten | Angela Bauer | Salzburg | 30.02.1948 |
| Betty Huber | Bregenz | 14.05.1989 | Vorarlberg | Regina Wagner | Graz | 22.11.1970 |

| Person | Born in | Born on | Lived in |
| --- | --- | --- | --- |
| Anna Gruber | Klagenfurt | 12.10.1967 | Wien |
| Anna Gruber | Klagenfurt | 12.10.1967 | Linz |
| Betty Huber | Bregenz | 14.05.1989 | Linz |
| Betty Huber | Bregenz | 14.05.1989 | Salzburg |
  • compliant with unique surrogate key (Person ID)
| Person | Born in | Born on | Person ID | Born in Land | Mother’s name | Mother born in | Mother born on |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Anna Gruber | Klagenfurt | 12.10.1967 | 1 | Kärnten | Angela Bauer | Salzburg | 30.02.1948 |
| Betty Huber | Bregenz | 14.05.1989 | 2 | Vorarlberg | Regina Wagner | Graz | 22.11.1970 |

| Person ID | Lived in |
| --- | --- |
| 1 | Wien |
| 1 | Linz |
| 2 | Linz |
| 2 | Salzburg |
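In SQL, the compliant variant with a surrogate Person ID could be sketched like this (illustrative names; the mother-related columns are omitted for brevity):

```sql
CREATE TABLE person (
    person_id    bigint GENERATED ALWAYS AS IDENTITY UNIQUE,
    name         text NOT NULL,
    born_in      text NOT NULL,
    born_on      date NOT NULL,
    born_in_land text,
    PRIMARY KEY (name, born_in, born_on)
);

-- The multivalued "lived in" attribute gets its own table; one row per
-- place, so no field ever has to hold a list.
CREATE TABLE person_residence (
    person_id bigint NOT NULL REFERENCES person (person_id),
    lived_in  text NOT NULL,
    PRIMARY KEY (person_id, lived_in)
);
```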
  • 2NF — requires that (for a compound primary key) no non-key attribute depends just on part of the primary key. In other words, non-key attributes must depend on the whole primary key. If such a partial dependency is identified, the table must be split, and both dependent attribute(s) and part of the primary key it depends upon moved to a separate table.
  • non-compliant
| Person | Born in | Born on | Person ID | Born in Land | Mother’s name | Mother born in | Mother born on |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Anna Gruber | Klagenfurt | 12.10.1967 | 1 | Kärnten | Angela Bauer | Salzburg | 30.02.1948 |
| Betty Huber | Bregenz | 14.05.1989 | 2 | Vorarlberg | Regina Wagner | Graz | 22.11.1970 |
  • compliant — fields (born in land) dependent on part of the primary key (born in) are moved to a separate table
| Person | Born in | Born on | Person ID | Mother’s name | Mother born in | Mother born on |
| --- | --- | --- | --- | --- | --- | --- |
| Anna Gruber | Klagenfurt | 12.10.1967 | 1 | Angela Bauer | Salzburg | 30.02.1948 |
| Betty Huber | Bregenz | 14.05.1989 | 2 | Regina Wagner | Graz | 22.11.1970 |

This assumes that for the cities table below, the city name and only the city name is the primary key (which usually is not the case in the real world).

| City | Land |
| --- | --- |
| Klagenfurt | Kärnten |
| Bregenz | Vorarlberg |
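A sketch of the 2NF split (illustrative names, with the same simplification: the city name alone is assumed unique):

```sql
CREATE TABLE city (
    name text PRIMARY KEY,  -- simplification: city names assumed unique
    land text NOT NULL
);

-- "born_in_land" depended only on "born_in", i.e. on part of the compound
-- primary key, so it now lives in the city table instead.
CREATE TABLE person (
    name    text NOT NULL,
    born_in text NOT NULL REFERENCES city (name),
    born_on date NOT NULL,
    PRIMARY KEY (name, born_in, born_on)
);
```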
  • 3NF — requires that there is no transitive dependency (through another field depending directly on the primary key) on the primary key. This means that if the values of two or more fields consistently appear together, they should be moved to a separate table, and one of them should be made a key, so that information repetition is avoided.
  • non-compliant
| Person | Born in | Born on | Person ID | Mother’s name | Mother born in | Mother born on |
| --- | --- | --- | --- | --- | --- | --- |
| Anna Gruber | Klagenfurt | 12.10.1967 | 1 | Angela Bauer | Salzburg | 30.02.1948 |
| Betty Huber | Bregenz | 14.05.1989 | 2 | Regina Wagner | Graz | 22.11.1970 |
  • compliant — fields (mother born in & mother born on) transitively dependent (through mother’s name) on the primary key (person, born in & born on) are moved to separate rows. This is possible because a “mother” is also a “person”. Had this been a different object type, it would have been moved to a separate table.
| Person | Born in | Born on | Person ID | Mother ID |
| --- | --- | --- | --- | --- |
| Anna Gruber | Klagenfurt | 12.10.1967 | 1 | 3 |
| Betty Huber | Bregenz | 14.05.1989 | 2 | 4 |
| Angela Bauer | Salzburg | 30.02.1948 | 3 | «null» |
| Regina Wagner | Graz | 22.11.1970 | 4 | «null» |
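Since a mother is just another person, the 3NF result can be sketched with a nullable self-referencing foreign key (illustrative names):

```sql
CREATE TABLE person (
    person_id bigint GENERATED ALWAYS AS IDENTITY UNIQUE,
    name      text NOT NULL,
    born_in   text NOT NULL,
    born_on   date NOT NULL,
    -- the transitively dependent mother attributes become a reference to
    -- another person row; NULL when the mother is not recorded
    mother_id bigint REFERENCES person (person_id),
    PRIMARY KEY (name, born_in, born_on)
);
```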
  • BCNF — Boyce-Codd Normal Form, aka 3.5NF, is just a slightly stronger version of 3NF; often, a table in 3NF is also compliant with BCNF.
    It requires that no part of a primary key is functionally dependent on a non-key attribute. The difference is quite subtle — until now I have been talking about non-key attributes which depend on key attributes. Now it is the other way round.
  • non-compliant — Born in is part of the primary key and functionally depends on the non-key attribute Birth Post Code
| Person | Born in | Born on | Person ID | Mother ID | Birth Post Code |
| --- | --- | --- | --- | --- | --- |
| Anna Gruber | Klagenfurt | 12.10.1967 | 1 | 3 | 9073 |
| Betty Huber | Bregenz | 14.05.1989 | 2 | 4 | 6900 |
| Angela Bauer | Salzburg | 30.02.1948 | 3 | «null» | 5020 |
| Regina Wagner | Graz | 22.11.1970 | 4 | «null» | 8032 |
  • compliant — the attribute “Birth Post Code” replaces “Born in” in the primary key. Also, the structure of the Cities table changes (and becomes much more realistic). Note that with this change, the data are correctly normalized, but less readable.
| Person | Birth Post Code | Born on | Person ID | Mother ID |
| --- | --- | --- | --- | --- |
| Anna Gruber | 9073 | 12.10.1967 | 1 | 3 |
| Betty Huber | 6900 | 14.05.1989 | 2 | 4 |
| Angela Bauer | 5020 | 30.02.1948 | 3 | «null» |
| Regina Wagner | 8032 | 22.11.1970 | 4 | «null» |

| Post Code | City | Land |
| --- | --- | --- |
| 9073 | Klagenfurt | Kärnten |
| 6900 | Bregenz | Vorarlberg |
| 5020 | Salzburg | Salzburg |
| 8032 | Graz | Steiermark |
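A sketch of the BCNF variant (illustrative names): the post code identifies the place of birth, so the ambiguous city name drops out of the key entirely.

```sql
CREATE TABLE city (
    post_code text PRIMARY KEY,
    name      text NOT NULL,
    land      text NOT NULL
);

CREATE TABLE person (
    person_id       bigint GENERATED ALWAYS AS IDENTITY UNIQUE,
    name            text NOT NULL,
    birth_post_code text NOT NULL REFERENCES city (post_code),
    born_on         date NOT NULL,
    mother_id       bigint REFERENCES person (person_id),
    PRIMARY KEY (name, birth_post_code, born_on)
);
```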

This is usually where information about normal forms ends, particularly because the remaining normal forms are the ones any practitioner would usually not be able to quote from memory, and are generally considered…

not-so relevant

…for practical applications. This, however, is not quite true: some of them are currently — even if not deliberately — applied in day-to-day design practice, and that is a good thing.
These forms were introduced after Edgar F. Codd’s original work. Tables violating them can actually occur, and should be taken into account. I have given examples below:

  • 4NF — requires that there is no multivalued dependency on non-superkey attributes
  • non-compliant — this table has only one key (there are no non-key attributes), but assuming that each trainer is able to deliver all of their courses in every language they speak, adding a new course would require inserting multiple rows. Note, however, that if this assumption is not true (e.g. some trainers deliver only specific courses in particular languages), we cannot stop here and must go one step further, to 5NF.
| Trainer | Course | Language |
| --- | --- | --- |
| Michal | PostgreSQL Professional | EN |
| Michal | PostgreSQL Professional | PL |
| Pavlo | PostgreSQL Professional | EN |
| Pavlo | PostgreSQL Professional | SK |
| Michal | Administration & Performance Tuning | EN |
| Michal | Administration & Performance Tuning | PL |
| Hans | Administration & Performance Tuning | EN |
| Hans | Administration & Performance Tuning | DE |
| Hans | PostgreSQL Professional | EN |
| Hans | PostgreSQL Professional | DE |
  • compliant — the table is split in two
| Trainer | Course |
| --- | --- |
| Michal | PostgreSQL Professional |
| Michal | Administration & Performance Tuning |
| Pavlo | PostgreSQL Professional |
| Hans | PostgreSQL Professional |
| Hans | Administration & Performance Tuning |

| Trainer | Language |
| --- | --- |
| Michal | EN |
| Michal | PL |
| Pavlo | EN |
| Pavlo | SK |
| Hans | EN |
| Hans | DE |
  • 5NF — requires that there are no join dependencies other than those implied by the candidate keys. In other words, a table is in 5NF if it cannot be split (decomposed) further into smaller tables with smaller keys without losing information.
  • non-compliant — let’s use the table from the example above, but insert additional data: Pavlo has started to offer an additional course, but does not yet have all the materials in all languages
| Trainer | Course | Language |
| --- | --- | --- |
| Michal | PostgreSQL Professional | EN |
| Michal | PostgreSQL Professional | PL |
| Pavlo | PostgreSQL Professional | EN |
| Pavlo | PostgreSQL Professional | SK |
| Pavlo | Administration & Performance Tuning | EN |
| Michal | Administration & Performance Tuning | EN |
| Michal | Administration & Performance Tuning | PL |
| Hans | Administration & Performance Tuning | EN |
| Hans | Administration & Performance Tuning | DE |
| Hans | PostgreSQL Professional | EN |
| Hans | PostgreSQL Professional | DE |
  • compliant — one additional table must be introduced.
| Trainer | Course |
| --- | --- |
| Michal | PostgreSQL Professional |
| Michal | Administration & Performance Tuning |
| Pavlo | PostgreSQL Professional |
| Hans | PostgreSQL Professional |
| Hans | Administration & Performance Tuning |

| Trainer | Language |
| --- | --- |
| Michal | EN |
| Michal | PL |
| Pavlo | EN |
| Pavlo | SK |
| Hans | EN |
| Hans | DE |

| Course | Language |
| --- | --- |
| PostgreSQL Professional | EN |
| PostgreSQL Professional | DE |
| PostgreSQL Professional | PL |
| PostgreSQL Professional | SK |
| Administration & Performance Tuning | EN |
| Administration & Performance Tuning | DE |
| Administration & Performance Tuning | PL |
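Assuming the three decomposed tables are stored as trainer_course, trainer_language and course_language (illustrative names), the original relation is recovered by a three-way join, and no spurious trainer/course/language combinations appear:

```sql
CREATE TABLE trainer_course   (trainer text, course text,   PRIMARY KEY (trainer, course));
CREATE TABLE trainer_language (trainer text, language text, PRIMARY KEY (trainer, language));
CREATE TABLE course_language  (course  text, language text, PRIMARY KEY (course, language));

-- Reconstruct the original relation: a trainer offers a course in a
-- language only if all three pairwise facts hold.
SELECT tc.trainer, tc.course, tl.language
FROM trainer_course tc
JOIN trainer_language tl ON tl.trainer = tc.trainer
JOIN course_language  cl ON cl.course = tc.course
                        AND cl.language = tl.language;
```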
  • DKNF — requires that there are no constraints other than those expressed by domain or key constraints. In other words, constraints must be precisely defined; there is no place for an arbitrary or fuzzy choice of constrained values.
  • non-compliant — the table does not contain any precise indication of how to select values for the columns Length and Pace
| Course | Hours | Days | Length | Pace |
| --- | --- | --- | --- | --- |
| PostgreSQL Professional | 15 | 3 | normal | standard |
| Administration & Performance Tuning | 25 | 5 | long | standard |
| High Availability & Patroni | 24 | 3 | normal | intense |
  • compliant — the values for Length and Pace are precisely constrained. Note that in order to determine the pace of a course, an additional calculation must be performed.
| Course | Hours | Days |
| --- | --- | --- |
| PostgreSQL Professional | 15 | 3 |
| Administration & Performance Tuning | 25 | 5 |
| High Availability & Patroni | 24 | 3 |

| Length | min days | max days |
| --- | --- | --- |
| short | 1 | 1 |
| normal | 2 | 3 |
| long | 4 | 5 |

| Pace | min hours per day | max hours per day |
| --- | --- | --- |
| introductory | 3 | 4 |
| standard | 5 | 5 |
| intense | 6 | 8 |
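A sketch of the DKNF version (illustrative names): the fuzzy labels become lookup tables with precise ranges, and the pace calculation is performed in a view instead of being stored.

```sql
CREATE TABLE course (
    name  text PRIMARY KEY,
    hours integer NOT NULL,
    days  integer NOT NULL
);

CREATE TABLE length_category (
    length   text PRIMARY KEY,
    min_days integer NOT NULL,
    max_days integer NOT NULL
);

CREATE TABLE pace_category (
    pace              text PRIMARY KEY,
    min_hours_per_day integer NOT NULL,
    max_hours_per_day integer NOT NULL
);

-- Length follows directly from the number of days; pace requires the
-- additional hours-per-day calculation mentioned above.
CREATE VIEW course_classified AS
SELECT c.name, c.hours, c.days, l.length, p.pace
FROM course c
JOIN length_category l ON c.days BETWEEN l.min_days AND l.max_days
JOIN pace_category   p ON c.hours / c.days
                          BETWEEN p.min_hours_per_day AND p.max_hours_per_day;
```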
  • 6NF — requires that a row must not contain more than one non-key attribute in addition to the primary key. As impractical as this normal form might seem, it has its applications, e.g. in data warehouses and very sparse schemas, but it is most efficient in so-called columnar storage engines. For typical OLTP workloads, 6NF is not very practical, and it is too close for comfort to the dreaded EAV antipattern.

Now that you are familiar with data normalization in PostgreSQL and normal forms, keep alert for my next blog on the practical usage of these forms. With a little practice, you’ll soon have an intuitive feeling for which form belongs to what situation. Or, you can try printing out this blog and posting it on the wall behind your computer, if you too want to try out “Normalization by Osmosis”!

To continue reading, see Part 2 of this series, Practical Examples of Data Normalization.

You may also be interested in this CYBERTEC blog on auto-generating primary keys.
