Open Science in Python

Now that you know what computational notebooks are and why we should care about them, let’s start using them! This section introduces you to using Python for manipulating tabular data. Please read through it carefully and pay attention to how ideas about manipulating data are translated into code. For this part, you can read directly from the course website, although it is recommended you follow the section interactively by running the code on your own.

Once you have read through it, jump to the Do-It-Yourself section, which provides a challenge to complete on your own and allows you to put what you have already learnt into practice.

Data wrangling

Real-world datasets tend to be messy. There is no way around it: datasets have “holes” (missing data), the number of formats in which data can be stored is endless, and the best structure to share data is not always the optimal one to analyze them, hence the need to wrangle (manipulate, transform and structure) them. As has been pointed out in many outlets, much of the time spent on what is called (Geo-)Data Science goes not into sophisticated modeling and insight, but into more basic and less exotic tasks: obtaining data, processing them, turning them into a shape that makes analysis possible, and exploring them to get to know their basic properties.

In this session, you will use a few real-world datasets and learn how to process them in Python so they can be transformed and manipulated, if necessary, and analyzed. For this, we will introduce some of the fundamental tools of data analysis and scientific computing. We use a prepared dataset that spares us the more intricate processing that goes beyond the introductory level this session is aimed at.

In this notebook, we discuss several patterns to clean and structure data properly, including tidying, subsetting, and aggregating; and we finish with some basic visualization. An additional extension presents more advanced tricks to manipulate tabular data.

Before we get our hands data-dirty, let us import all the additional libraries we will need to run the code:

Loading packages

We will start by loading the core packages we will use to work with tabular data and the file system.

# This ensures visualizations are plotted inside the notebook
%matplotlib inline

import os              # This provides several system utilities
import pandas as pd    # This is the workhorse of data munging in Python

Datasets

We will be exploring some demographic characteristics in Liverpool. To do that, we will use a dataset that contains population counts, split by ethnic origin. These counts are aggregated at the Lower Layer Super Output Area (LSOA from now on). LSOAs are an official Census geography defined by the Office for National Statistics. You can think of them, more or less, as neighbourhoods. Many data products (Census, deprivation indices, etc.) use LSOAs as one of their main geographies.

To do this, we will download a data folder from GitHub called census2021_ethn. You should place it in a data folder that you will use throughout the course.

Import census data from csv

census2021 = pd.read_csv("data/census2021_ethn/liv_pop.csv", index_col='GeographyCode')

Let us stop for a minute to learn how we have read the file. Here are the main aspects to keep in mind:

  • We are using the method read_csv from the pandas library, which we have imported with the alias pd.

  • Here we pass the path to a local data file, but read_csv can also take a web address (URL), and sometimes datasets come bundled directly within packages.

  • The argument index_col is not strictly necessary but allows us to choose one of the columns as the index of the table. More on indices below.

  • We are using read_csv because the file we want to read is in the csv format. However, pandas allows many more formats to be read and written; a short sketch after this list illustrates a couple of alternatives. A full list of supported formats may be found here.

  • To ensure we can access the data we have read, we store it in an object that we call census2021. We will see more of what we can do with it below but, for now, just keep in mind that it allows us to save the result of read_csv.
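As a quick illustration of that flexibility, here is a minimal sketch of reading from other sources. The URL and file name below are made up for illustration and are not part of the course data:

# Read a csv directly from the web (hypothetical URL, for illustration only)
remote = pd.read_csv("https://example.org/data/some_table.csv")

# Read an Excel spreadsheet instead (hypothetical file; requires the openpyxl package)
spreadsheet = pd.read_excel("data/some_table.xlsx", sheet_name=0)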

Important

You need to store the data file on your computer, and read it locally. To do that, you can follow these steps:

  1. Download the census2021_ethn folder by right-clicking on this link and saving it

  2. Place the file in a data folder you have created where you intend to read it.

  3. Your folder should have the following structure:
     a. a gds folder (where you will save your quarto .qmd documents)
     b. a data folder
     c. the census2021_ethn folder inside your data folder

Download a folder from GitHub
  1. First go to https://download-directory.github.io/

  2. Then copy the address of the folder you need. For example: https://github.com/pietrostefani/gds/tree/main/data/London

  3. Paste it in the green box and give it a few minutes

  4. Check your Downloads folder and unzip

Data, sliced and diced

Now we are ready to start playing with and interrogating the dataset! What we have at our fingertips is a table that summarizes, for each of the LSOAs in Liverpool, how many people live in each, by the region of the world where they were born. We call these tables DataFrame objects, and they have a lot of functionality built-in to explore and manipulate the data they contain.

Structure

Let’s start by exploring the structure of a DataFrame. We can print it by simply typing its name:

census2021
Europe Africa Middle East and Asia The Americas and the Caribbean Antarctica and Oceania
GeographyCode
E01006512 910 106 840 24 0
E01006513 2225 61 595 53 7
E01006514 1786 63 193 61 5
E01006515 974 29 185 18 2
E01006518 1531 69 73 19 4
... ... ... ... ... ...
E01033764 2106 32 49 15 0
E01033765 1277 21 33 17 3
E01033766 1028 12 20 8 7
E01033767 1003 29 29 5 1
E01033768 1016 69 111 21 6

298 rows × 5 columns

Since they represent a table of data, DataFrame objects have two dimensions: rows and columns. Each of these is automatically assigned a name in what we will call its index. When printing, the index of each dimension is rendered in bold, as opposed to the standard rendering for the content. In the example above, we can see how the column index is automatically picked up from the .csv file’s column names. For rows, we specified when reading the file that we wanted the column GeographyCode, so that is used. If we hadn’t specified any, pandas would have automatically generated a sequence starting at 0 and running up to the number of rows minus one. This is the standard structure of a DataFrame object, so we will come back to it over and over. Importantly, even when we move to spatial data, our datasets will have a similar structure.
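If you want to inspect those indices directly, they are available as attributes of the table:

census2021.index     # The row index: the LSOA codes
census2021.columns   # The column index: the variable names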

One further feature of these tables is that they can hold columns with different types of data. In our example, this is not used, as we have counts (int, for integer, types) in every column. But it is useful to keep in mind that we can combine these with columns that hold other types of data, such as categories, text (str, for string), dates or, as we will see later in the course, geographic features.
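To check what type each column holds, pandas exposes the dtypes attribute; info() gives a fuller overview including non-null counts:

census2021.dtypes   # The data type stored in each column
census2021.info()   # Column types, non-null counts and memory usage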

Inspecting

We can check the top (bottom) X lines of the table by passing X to the method head (tail). If no number is passed, five lines are returned by default:

census2021.head() # read first 5 rows
census2021.tail() # read last 5 rows
Europe Africa Middle East and Asia The Americas and the Caribbean Antarctica and Oceania
GeographyCode
E01033764 2106 32 49 15 0
E01033765 1277 21 33 17 3
E01033766 1028 12 20 8 7
E01033767 1003 29 29 5 1
E01033768 1016 69 111 21 6
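If you need a different number of lines, pass it explicitly:

census2021.head(3)   # read first 3 rows
census2021.tail(10)  # read last 10 rows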

Summarise

We can get an overview of the values of the table:

census2021.describe()
Europe Africa Middle East and Asia The Americas and the Caribbean Antarctica and Oceania
count 298.00000 298.000000 298.000000 298.000000 298.000000
mean 1462.38255 29.818792 62.909396 8.087248 1.949664
std 248.67329 51.606065 102.519614 9.397638 2.168216
min 731.00000 0.000000 1.000000 0.000000 0.000000
25% 1331.25000 7.000000 16.000000 2.000000 0.000000
50% 1446.00000 14.000000 33.500000 5.000000 1.000000
75% 1579.75000 30.000000 62.750000 10.000000 3.000000
max 2551.00000 484.000000 840.000000 61.000000 11.000000

Note how the output is also a DataFrame object, so you can do with it the same things you would with the original table (e.g. writing it to a file).
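For instance, here is a minimal sketch of writing the summary out to a csv file (the output path is just an illustration):

summary = census2021.describe()
summary.to_csv("data/census2021_summary.csv")   # hypothetical output path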

In this case, the summary might be better presented if the table is “transposed”:

census2021.describe().T
count mean std min 25% 50% 75% max
Europe 298.0 1462.382550 248.673290 731.0 1331.25 1446.0 1579.75 2551.0
Africa 298.0 29.818792 51.606065 0.0 7.00 14.0 30.00 484.0
Middle East and Asia 298.0 62.909396 102.519614 1.0 16.00 33.5 62.75 840.0
The Americas and the Caribbean 298.0 8.087248 9.397638 0.0 2.00 5.0 10.00 61.0
Antarctica and Oceania 298.0 1.949664 2.168216 0.0 0.00 1.0 3.00 11.0

Columns

Create new columns

We can generate new variables by applying operations to existing ones. For example, we can calculate the total population by area. Here are a couple of ways to do it:

Longer, hardcoded:

total = (
    census2021['Europe']
    + census2021['Africa']
    + census2021['Middle East and Asia']
    + census2021['The Americas and the Caribbean']
    + census2021['Antarctica and Oceania']
)
# Print the top of the variable
total.head()
GeographyCode
E01006512    1880
E01006513    2941
E01006514    2108
E01006515    1208
E01006518    1696
dtype: int64

One shot:

census2021['Total_Population'] = census2021.sum(axis=1)
# Print the top of the variable
census2021.head()
Europe Africa Middle East and Asia The Americas and the Caribbean Antarctica and Oceania Total_Population
GeographyCode
E01006512 910 106 840 24 0 1880
E01006513 2225 61 595 53 7 2941
E01006514 1786 63 193 61 5 2108
E01006515 974 29 185 18 2 1208
E01006518 1531 69 73 19 4 1696

Note that we are summing along “axis=1”. In a DataFrame object, “axis 0” and “axis 1” refer to the rows and columns respectively, so summing along “axis 1” adds up values across the columns, producing one total per row.
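For example, summing along “axis 0” instead collapses the rows, giving one figure per column, i.e. the citywide total of each group:

census2021.sum(axis=0)   # One value per column: the citywide total of each group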

A different spin on this is assigning scalar values: we can create a new variable where every row takes the same value, and modify it afterwards:

# New variable with all ones
census2021['ones'] = 1
census2021.head()
Europe Africa Middle East and Asia The Americas and the Caribbean Antarctica and Oceania Total_Population ones
GeographyCode
E01006512 910 106 840 24 0 1880 1
E01006513 2225 61 595 53 7 2941 1
E01006514 1786 63 193 61 5 2108 1
E01006515 974 29 185 18 2 1208 1
E01006518 1531 69 73 19 4 1696 1
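The same mechanism works for derived quantities. As an illustration (this variable is not part of the original dataset), we can compute the share of the European-born population in each area as a standalone series:

# Share of European-born residents per LSOA (illustrative, kept as a separate series)
pct_europe = census2021['Europe'] / census2021['Total_Population'] * 100
pct_europe.head()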

Delete columns

Permanently deleting variables is also within reach of one command:

del census2021['ones']
census2021.head()
Europe Africa Middle East and Asia The Americas and the Caribbean Antarctica and Oceania Total_Population
GeographyCode
E01006512 910 106 840 24 0 1880
E01006513 2225 61 595 53 7 2941
E01006514 1786 63 193 61 5 2108
E01006515 974 29 185 18 2 1208
E01006518 1531 69 73 19 4 1696
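If you would rather keep the original intact, the drop method returns a copy of the table without the selected columns instead of deleting them in place:

# Return a new table without a column, leaving census2021 unchanged
census2021.drop(columns=['Total_Population']).head()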

Queries

Index-based queries

Here we explore how we can subset parts of a DataFrame if we know exactly which bits we want. For example, if we want to extract the total and European population of the first four areas in the table:

We use loc with lists:

eu_tot_first4 = census2021.loc[['E01006512', 'E01006513', 'E01006514', 'E01006515'], ['Total_Population', 'Europe']]

eu_tot_first4
Total_Population Europe
GeographyCode
E01006512 1880 910
E01006513 2941 2225
E01006514 2108 1786
E01006515 1208 974
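Note that loc selects by label. Its positional counterpart, iloc, does the same by integer position. A sketch of the equivalent selection, assuming the column order shown above (Europe first, Total_Population sixth):

# Position-based equivalent: first four rows; columns 5 (Total_Population) and 0 (Europe)
eu_tot_first4_pos = census2021.iloc[:4, [5, 0]]
eu_tot_first4_pos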

Condition-based queries

However, sometimes, we do not know exactly which observations we want, but we do know what conditions they need to satisfy (e.g. areas with more than 2,000 inhabitants). For these cases, DataFrames support selection based on conditions. Let us see a few examples. Suppose we want to select…

Areas with more than 900 people in Total:

pop900 = census2021.loc[census2021['Total_Population'] > 900, :]
pop900
Europe Africa Middle East and Asia The Americas and the Caribbean Antarctica and Oceania Total_Population
GeographyCode
E01006512 910 106 840 24 0 1880
E01006513 2225 61 595 53 7 2941
E01006514 1786 63 193 61 5 2108
E01006515 974 29 185 18 2 1208
E01006518 1531 69 73 19 4 1696
... ... ... ... ... ... ...
E01033764 2106 32 49 15 0 2202
E01033765 1277 21 33 17 3 1351
E01033766 1028 12 20 8 7 1075
E01033767 1003 29 29 5 1 1067
E01033768 1016 69 111 21 6 1223

298 rows × 6 columns

Areas where there are fewer than 750 Europeans:

euro750 = census2021.loc[census2021['Europe'] < 750, :]
euro750
Europe Africa Middle East and Asia The Americas and the Caribbean Antarctica and Oceania Total_Population
GeographyCode
E01033757 731 39 223 29 3 1025

Areas with exactly ten people from Antarctica and Oceania:

tenOA = census2021.loc[census2021['Antarctica and Oceania'] == 10, :]
tenOA
Europe Africa Middle East and Asia The Americas and the Caribbean Antarctica and Oceania Total_Population
GeographyCode
E01006679 1353 484 354 31 10 2232

Pro-tip: These queries can grow in sophistication with almost no limits.
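As an illustration, the query method expresses the same kind of condition as a string, which can be easier to read for longer expressions (the backticks are needed because the column name contains spaces):

# Same selection as above, written with the query method
census2021.query("`Antarctica and Oceania` == 10")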

Combining queries

Now all of these queries can be combined with each other, for further flexibility. For example, imagine we want areas with more than 25 people from the Americas and Caribbean, but less than 1,500 in total:

ac25_l500 = census2021.loc[
    (census2021['The Americas and the Caribbean'] > 25)
    & (census2021['Total_Population'] < 1500), :
]
ac25_l500
Europe Africa Middle East and Asia The Americas and the Caribbean Antarctica and Oceania Total_Population
GeographyCode
E01033750 1235 53 129 26 5 1448
E01033752 1024 19 114 33 6 1196
E01033754 1262 37 112 32 9 1452
E01033756 886 31 221 42 5 1185
E01033757 731 39 223 29 3 1025
E01033761 1138 52 138 33 11 1372
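Besides & (and), conditions can be combined with | (or) and negated with ~ (not). For example, areas with fewer than 750 Europeans or more than 50 people from the Americas and the Caribbean:

# | works as "or": either condition is enough for a row to be selected
either = census2021.loc[
    (census2021['Europe'] < 750)
    | (census2021['The Americas and the Caribbean'] > 50), :
]
either.head()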

Sorting

Among the many operations DataFrame objects support, one of the most useful ones is to sort a table based on a given column. For example, imagine we want to sort the table by total population:

db_pop_sorted = census2021.sort_values('Total_Population', ascending=False)
db_pop_sorted.head()
Europe Africa Middle East and Asia The Americas and the Caribbean Antarctica and Oceania Total_Population
GeographyCode
E01006747 2551 163 812 24 2 3552
E01006513 2225 61 595 53 7 2941
E01006751 1843 139 568 21 1 2572
E01006524 2235 36 125 24 11 2431
E01006787 2187 53 75 13 2 2330
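sort_values also accepts a list of columns, sorting by the first and breaking ties with the next; and sort_index reorders the table by its index instead:

# Sort by Europe, breaking ties with Africa
census2021.sort_values(['Europe', 'Africa'], ascending=False).head()

# Sort the rows by their index (the LSOA codes)
census2021.sort_index().head()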

Additional resources