Add deploy assets and update telemetry datasets

Prepare deployment package and clean telemetry/lab data: add deploy/ (README, datasaurus.csv, datasets and lab01 notebooks), add new lab02 dataset notebook variants (lab02_task1_datasets_v2 and v2b) and solutions for task3, and update multiple lab02 telemetry and git-activity notebooks. Clean and normalize claude/dataset_A_indie_game_telemetry_clean.csv (fill/standardize timestamps, session lengths, and other fields) to improve consistency for downstream analysis.
2026-02-24 10:07:31 +00:00
parent fa9898b321
commit d689ada45e
17 changed files with 46042 additions and 9782 deletions

File diff suppressed because it is too large


@@ -4,10 +4,13 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Lab 02 · Task 1 Exploratory Data Analysis with Pandas & Seaborn\n",
"# Lab 01<br>Task 1: Exploratory Data Analysis with Pandas & Seaborn\n",
"\n",
"**Estimated time:** ~30 minutes \n",
"**Dataset:** `datasaurus_dozen.csv`\n",
"This task serves two purposes. It introduces you to some of the basic tools to start understanding datasets and shows you why descriptive statistics may not be enough to understand the nature of a dataset.\n",
"\n",
"Additionally, this simple first task also serves the purpose of getting you acquainted with Jupyter notebooks.\n",
"\n",
"**Dataset:** `datasaurus.csv`\n",
"\n",
"---\n",
"\n",
@@ -23,9 +26,9 @@
"\n",
"### Context\n",
"\n",
"The **Datasaurus Dozen** is a collection of 13 small datasets deliberately constructed to share *identical* summary statistics while looking completely different when plotted. It was created by Matejka & Fitzmaurice (2017) to demonstrate a modern version of Anscombe's Quartet.\n",
"The **Datasaurus Dozen** is a collection of 13 small datasets created by Matejka & Fitzmaurice (2017) to demonstrate a modern version of Anscombe's Quartet.\n",
"\n",
"This task will take you through the same journey a data analyst faces: you will start with raw numbers, run the usual summaries, and then discover through visualisation that numbers alone were hiding the story.\n",
"This task will take you through the same journey a data analyst faces: you will start with raw numbers, run the usual summaries, and then discover, through visualisation, that numbers alone were hiding the story.\n",
"\n",
"---"
]
@@ -34,7 +37,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Part 1 Load and Inspect the Data\n",
"## Part 1: Load and Inspect the Data\n",
"\n",
"Start by importing the libraries you need and loading the dataset."
]
@@ -181,7 +184,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### 1.1 Structure and data types\n",
"### 1.1. Structure and data types\n",
"\n",
"Before computing anything, always understand what you are working with."
]
@@ -255,7 +258,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### 1.2 Overall summary statistics\n",
"### 1.2. Overall summary statistics\n",
"\n",
"Use `describe()` to get a global numerical summary of `x` and `y`."
]
@@ -363,7 +366,7 @@
"source": [
"---\n",
"\n",
"## Part 2 Grouped Statistics: The Reveal\n",
"## Part 2: Grouped Statistics\n",
"\n",
"The dataset column holds 13 different named groups. Let's compute summary statistics **per group** and see if the groups differ."
]
@@ -577,7 +580,7 @@
"name": "stderr",
"output_type": "stream",
"text": [
"C:\\Users\\sss\\AppData\\Local\\Temp\\ipykernel_95640\\2163207487.py:2: FutureWarning: DataFrameGroupBy.apply operated on the grouping columns. This behavior is deprecated, and in a future version of pandas the grouping columns will be excluded from the operation. Either pass `include_groups=False` to exclude the groupings or explicitly select the grouping columns after groupby to silence this warning.\n",
"C:\\Users\\sss\\AppData\\Local\\Temp\\ipykernel_64804\\2163207487.py:2: FutureWarning: DataFrameGroupBy.apply operated on the grouping columns. This behavior is deprecated, and in a future version of pandas the grouping columns will be excluded from the operation. Either pass `include_groups=False` to exclude the groupings or explicitly select the grouping columns after groupby to silence this warning.\n",
" correlation = df.groupby('dataset').apply(lambda g: g['x'].corr(g['y'])).round(2)\n"
]
}
@@ -593,10 +596,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"> **Question:** Look at the table above. Are the 13 datasets statistically different from each other? \n",
"> **Question:** Look at the table above. Are the 13 datasets statistically different from each other? \n",
"> Write your answer in the cell below before moving on.\n",
"\n",
"*(Double-click this cell to write your answer here)*\n",
"\n",
"---"
]
@@ -605,11 +607,19 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Part 3 Now Let's Actually Look at the Data\n",
"<!-- ## Part 3: Now Let us Actually Look at the Data\n",
"\n",
"We will focus on three sub-datasets: **`dino`**, **`star`**, and **`bullseye`**. These three were chosen because they produce a dramatic visual contrast despite their identical statistics.\n",
"\n",
"Later, feel free to explore the remaining 10 groups."
"Later, feel free to explore the remaining 10 groups. -->"
]
},
{
"cell_type": "markdown",
"id": "d6f82ff1",
"metadata": {},
"source": [
"## Part 3: Visualizing the Data"
]
},
{
@@ -739,10 +749,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"> **Question:** What would a data analyst have concluded if they had only looked at the summary statistics table? \n",
"> **Question:** What would a data analyst have concluded if they had only looked at the summary statistics table? \n",
"> What does this tell you about when and why visualisation is necessary?\n",
"\n",
"*(Double-click to write your answer here)*\n",
"\n",
"---"
]
@@ -789,7 +798,7 @@
"source": [
"---\n",
"\n",
"## ✏️ Your Turn — Free Exploration\n",
"## Your Turn — Free Exploration\n",
"\n",
"The cells below are yours. Here are some things to try:\n",
"\n",
@@ -801,15 +810,6 @@
"> **Key question to keep in mind:** For each plot type you try — does it reveal the structural difference between the datasets, or does it hide it?"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Your exploration here\n"
]
},
{
"cell_type": "code",
"execution_count": null,


@@ -0,0 +1,424 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "d44c354e",
"metadata": {},
"source": [
"# Lab 01<br>Task 1: Exploratory Data Analysis with Pandas & Seaborn\n",
"\n",
"This task serves two purposes. It introduces you to some of the basic tools to start understanding datasets and shows you why descriptive statistics may not be enough to understand the nature of a dataset.\n",
"\n",
"Additionally, this simple first task also serves the purpose of getting you acquainted with Jupyter notebooks.\n",
"\n",
"**Dataset:** `datasaurus.csv`\n",
"\n",
"---\n",
"\n",
"### Objectives\n",
"\n",
"By the end of this task you will be able to:\n",
"- Use `pandas` to inspect a dataset's structure, types, and summary statistics\n",
"- Apply grouped aggregations to compare subsets of data\n",
"- Use `seaborn` to produce scatter plots that reveal structure invisible to statistics\n",
"- Articulate *why* visualisation is an essential — not optional — step in data analysis\n",
"\n",
"---\n",
"\n",
"### Context\n",
"\n",
"The **Datasaurus Dozen** is a collection of 13 small datasets created by Matejka & Fitzmaurice (2017) to demonstrate a modern version of Anscombe's Quartet.\n",
"\n",
"This task will take you through the same journey a data analyst faces: you will start with raw numbers, run the usual summaries, and then discover, through visualisation, that numbers alone were hiding the story.\n",
"\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "350a4fd8",
"metadata": {},
"source": [
"## Part 1: Load and Inspect the Data\n",
"\n",
"Start by importing the libraries you need and loading the dataset."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "ed1a7a01",
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"import seaborn as sns\n",
"import matplotlib.pyplot as plt\n",
"\n",
"# Configure plot style\n",
"sns.set_theme(style='whitegrid', palette='tab10')\n",
"plt.rcParams['figure.dpi'] = 100"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "9cf77ef2",
"metadata": {},
"outputs": [],
"source": [
"# Load the dataset\n",
"df = pd.read_csv('datasaurus.csv')\n",
"\n",
"# Preview the first rows\n",
"df.head(10)"
]
},
{
"cell_type": "markdown",
"id": "a2e51209",
"metadata": {},
"source": [
"### 1.1. Structure and data types\n",
"\n",
"Before computing anything, always understand what you are working with."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "6a45f4e3",
"metadata": {},
"outputs": [],
"source": [
"# Shape of the dataset (rows, columns)\n",
"print('Shape:', df.shape)\n",
"\n",
"# Column names and data types\n",
"print('\\nDtypes:')\n",
"print(df.dtypes)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "d01329b3",
"metadata": {},
"outputs": [],
"source": [
"# How many unique sub-datasets are there, and how many rows does each contain?\n",
"print('Unique datasets:', df['dataset'].nunique())\n",
"print('\\nRows per dataset:')\n",
"print(df['dataset'].value_counts())"
]
},
{
"cell_type": "markdown",
"id": "1545a53f",
"metadata": {},
"source": [
"### 1.2. Overall summary statistics\n",
"\n",
"Use `describe()` to get a global numerical summary of `x` and `y`."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "a92b670e",
"metadata": {},
"outputs": [],
"source": [
"# Summary statistics for the entire dataset\n",
"df[['x', 'y']].describe().round(2)"
]
},
{
"cell_type": "markdown",
"id": "16b1a9e3",
"metadata": {},
"source": [
"---\n",
"\n",
"## Part 2: Grouped Statistics — The Reveal\n",
"\n",
"The dataset column holds 13 different named groups. Let's compute summary statistics **per group** and see if the groups differ."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "e7693c95",
"metadata": {},
"outputs": [],
"source": [
"# Compute mean and standard deviation of x and y for each sub-dataset\n",
"grouped_stats = (\n",
" df.groupby('dataset')[['x', 'y']]\n",
" .agg(['mean', 'std'])\n",
" .round(2)\n",
")\n",
"\n",
"grouped_stats"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "837a2552",
"metadata": {},
"outputs": [],
"source": [
"# Also compute the Pearson correlation between x and y per group\n",
"correlation = df.groupby('dataset').apply(lambda g: g['x'].corr(g['y'])).round(2)\n",
"correlation.name = 'corr(x,y)'\n",
"print(correlation)"
]
},
{
"cell_type": "markdown",
"id": "c40be027",
"metadata": {},
"source": [
"> **Question:** Look at the table above. Are the 13 datasets statistically different from each other? \n",
"> Write your answer in the cell below before moving on.\n",
"\n",
"\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "cc4c40dd",
"metadata": {},
"source": [
"<!-- ## Part 3: Now Let us Actually Look at the Data\n",
"\n",
"We will focus on three sub-datasets: **`dino`**, **`star`**, and **`bullseye`**. These three were chosen because they produce a dramatic visual contrast despite their identical statistics.\n",
"\n",
"Later, feel free to explore the remaining 10 groups. -->"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "d4fde0b1",
"metadata": {},
"outputs": [],
"source": [
"# Filter to the three focus datasets\n",
"focus = ['dino', 'star', 'bullseye']\n",
"df_focus = df[df['dataset'].isin(focus)].copy()\n",
"\n",
"print(f'Rows in subset: {len(df_focus)}')"
]
},
{
"cell_type": "markdown",
"id": "86d8b1b6",
"metadata": {},
"source": [
"### 3.1. Individual scatter plots"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "c2f4c527",
"metadata": {},
"outputs": [],
"source": [
"fig, axes = plt.subplots(1, 3, figsize=(15, 5), sharey=True)\n",
"\n",
"colors = sns.color_palette('tab10', 3)\n",
"\n",
"for ax, name, color in zip(axes, focus, colors):\n",
" subset = df_focus[df_focus['dataset'] == name]\n",
" ax.scatter(subset['x'], subset['y'], color=color, alpha=0.7, s=40, edgecolors='white', linewidths=0.4)\n",
" ax.set_title(name, fontsize=14, fontweight='bold')\n",
" ax.set_xlabel('x')\n",
" ax.set_ylabel('y')\n",
"\n",
"fig.suptitle('Same statistics, completely different data', fontsize=16, fontweight='bold', y=1.02)\n",
"plt.tight_layout()\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"id": "538ecb6f",
"metadata": {},
"source": [
"### 3.2. Side-by-side with statistics overlay\n",
"\n",
"Let's add the mean and standard deviation annotations to make the point explicit."
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "d677b3ec",
"metadata": {},
"outputs": [],
"source": [
"fig, axes = plt.subplots(1, 3, figsize=(15, 5.5), sharey=True)\n",
"\n",
"for ax, name, color in zip(axes, focus, colors):\n",
" subset = df_focus[df_focus['dataset'] == name]\n",
" \n",
" ax.scatter(subset['x'], subset['y'], color=color, alpha=0.65, s=40,\n",
" edgecolors='white', linewidths=0.4, label='observations')\n",
" \n",
" # Mean crosshair\n",
" mx, my = subset['x'].mean(), subset['y'].mean()\n",
" ax.axvline(mx, color='black', linestyle='--', linewidth=1.0, alpha=0.6)\n",
" ax.axhline(my, color='black', linestyle='--', linewidth=1.0, alpha=0.6)\n",
" ax.scatter([mx], [my], color='black', s=80, zorder=5, label=f'mean ({mx:.1f}, {my:.1f})')\n",
" \n",
" # Stats box\n",
" stats_text = (\n",
" f\"mean x = {subset['x'].mean():.2f}\\n\"\n",
" f\"mean y = {subset['y'].mean():.2f}\\n\"\n",
" f\"sd x = {subset['x'].std():.2f}\\n\"\n",
" f\"sd y = {subset['y'].std():.2f}\\n\"\n",
" f\"corr = {subset['x'].corr(subset['y']):.2f}\"\n",
" )\n",
" ax.text(0.03, 0.97, stats_text, transform=ax.transAxes,\n",
" fontsize=8.5, verticalalignment='top', fontfamily='monospace',\n",
" bbox=dict(boxstyle='round,pad=0.4', facecolor='white', alpha=0.85, edgecolor='grey'))\n",
" \n",
" ax.set_title(name, fontsize=14, fontweight='bold')\n",
" ax.set_xlabel('x')\n",
" ax.set_ylabel('y')\n",
"\n",
"fig.suptitle('Datasaurus Dozen — statistics are identical, shapes are not',\n",
" fontsize=14, fontweight='bold', y=1.01)\n",
"plt.tight_layout()\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"id": "e295910e",
"metadata": {},
"source": [
"> **❓ Question:** What would a data analyst have concluded if they had only looked at the summary statistics table? \n",
"> What does this tell you about when and why visualisation is necessary?\n",
"\n",
"*(Double-click to write your answer here)*\n",
"\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "86dea1fb",
"metadata": {},
"source": [
"## Part 4: Small Multiples — All 13 Datasets at Once\n",
"\n",
"Seaborn's `FacetGrid` makes it easy to produce a *small multiples* plot — the same chart type repeated for each group. This is a powerful pattern for comparing distributions across many categories."
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "d7eb9f5a",
"metadata": {},
"outputs": [],
"source": [
"g = sns.FacetGrid(df, col='dataset', col_wrap=5, height=3, aspect=1.0,\n",
" sharex=False, sharey=False)\n",
"g.map(sns.scatterplot, 'x', 'y', alpha=0.6, s=18, color='steelblue', edgecolor='white', linewidth=0.2)\n",
"g.set_titles(col_template='{col_name}', size=10)\n",
"g.figure.suptitle('All 13 Datasaurus Dozen datasets — identical statistics',\n",
" fontsize=13, fontweight='bold', y=1.01)\n",
"plt.tight_layout()\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"id": "becc716d",
"metadata": {},
"source": [
"---\n",
"\n",
"## ✏️ Your Turn — Free Exploration\n",
"\n",
"The cells below are yours. Here are some things to try:\n",
"\n",
"- **Histograms:** Use `sns.histplot()` to plot the distribution of `x` or `y` for two contrasting datasets. Do the distributions look different?\n",
"- **KDE plots:** Try `sns.kdeplot(data=df_focus, x='x', hue='dataset')` to overlay density curves for the three focus groups.\n",
"- **Pair plots:** Use `sns.pairplot(df_focus, hue='dataset')` — what does it add?\n",
"- **Box plots:** Use `sns.boxplot(data=df, x='dataset', y='x')` — can boxplots reveal the structural differences?\n",
"\n",
"> **Key question to keep in mind:** For each plot type you try — does it reveal the structural difference between the datasets, or does it hide it?"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "83a2bc01",
"metadata": {},
"outputs": [],
"source": [
"# Your exploration here\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d7aac288",
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "code",
"execution_count": null,
"id": "3cc44f9f",
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"id": "3c09cd29",
"metadata": {},
"source": [
"---\n",
"\n",
"## 🔑 Key Takeaways\n",
"\n",
"- Summary statistics (mean, SD, correlation) can be completely identical across datasets with totally different structure\n",
"- Visualisation is not a finishing step — it is a **diagnostic step** that must happen early\n",
"- Different chart types reveal different aspects: scatterplots show point-level structure, histograms show marginal distributions, box plots summarise spread but can hide shape\n",
"- The small multiples pattern (FacetGrid) is a powerful way to compare many groups at a glance\n",
"\n",
"→ In **Task 2**, you will move to a real-world dataset with real problems — and discover that the \"hard work\" you just did manually can be partially automated."
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
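The telemetry cleaning notebooks in the diffs that follow repeatedly fix comma decimal separators (`1,80` instead of `1.80`). A minimal sketch of that repair, with hypothetical values:

```python
import pandas as pd

# Values loaded as strings because some rows use ',' as the decimal separator
amounts = pd.Series(['1,80', '2.50', '0,99'])

# Swap the comma for a point, then convert the column to float
clean = amounts.str.replace(',', '.', regex=False).astype(float)
```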

File diff suppressed because one or more lines are too long


@@ -563,7 +563,17 @@
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2026-02-23 18:02:42,900 - INFO - Executing shutdown due to inactivity...\n",
"2026-02-23 18:02:42,946 - INFO - Executing shutdown...\n",
"2026-02-23 18:02:42,962 - INFO - Not running with the Werkzeug Server, exiting by searching gc for BaseWSGIServer\n"
]
}
],
"source": [
"# Shut down the previous D-Tale instance and reload with the clean data\n",
"d.kill()\n",


@@ -584,7 +584,17 @@
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2026-02-23 18:31:02,737 - INFO - Executing shutdown due to inactivity...\n",
"2026-02-23 18:31:02,790 - INFO - Executing shutdown...\n",
"2026-02-23 18:31:02,795 - INFO - Not running with the Werkzeug Server, exiting by searching gc for BaseWSGIServer\n"
]
}
],
"source": [
"# OPTIONAL: Two-pass strategy — try a second format for the rows that failed\n",
"# If you determine the ambiguous rows use DD/MM/YYYY, try dayfirst=True on them only\n",


@@ -1,12 +1,32 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"id": "e28cb3de",
"metadata": {},
"outputs": [],
"source": [
"# 43679 -- Interactive Visualization\n",
"# 2025 - 2026\n",
"# 2nd semester\n",
"# Lab 1 - EDA (guided)\n",
"# ver 1.2\n",
"# 24022026 - Cosmetics; added rationale for task in scope of course"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Lab 02 · Task 2 Guided EDA and Data Cleaning\n",
"# Lab 02<br>Task 2: Guided EDA and Data Cleaning\n",
"\n",
"The purpose of this task is to introduce you to the basic steps of data preparation for a dataset with several illustrative quality issues. In most situations you already have the basic code to be run; in others, you need to infer from existing code to complete the step. What is important here is for you to be able to identify the issues, understand the tools and approaches that may help tackle them, and acquire a systematic way of thinking about data preparation.\n",
"\n",
"**Don't just run the code. Understand why it is needed and what it is doing.**\n",
"\n",
"**NOTE**: For cells asking questions or containing tables to be filled in, just double-click the cell and edit it with your answers and rationale.\n",
"\n",
"**Estimated time:** ~50 minutes \n",
"**Dataset:** `dataset_A_indie_game_telemetry.csv`\n",
"\n",
"---\n",
@@ -23,9 +43,9 @@
"\n",
"| Tool | Role |\n",
"|---|---|\n",
"| **SweetViz** | Automated profiling generate a report, triage what needs fixing |\n",
"| **D-Tale** | Interactive navigation browse rows, inspect value counts, confirm fixes visually |\n",
"| **pandas** | All actual cleaning every transformation is explicit, reproducible code |\n",
"| **SweetViz** | Automated profiling: generate a report, triage what needs fixing |\n",
"| **D-Tale** | Interactive navigation: browse rows, inspect value counts, confirm fixes visually |\n",
"| **pandas** | All actual cleaning: every transformation is explicit, reproducible code |\n",
"\n",
"---"
]
@@ -82,7 +102,7 @@
"\n",
"---\n",
"\n",
"## Part 2 Automated Profiling with SweetViz\n",
"## Part 2: Automated Profiling with SweetViz\n",
"\n",
"SweetViz generates a visual report for the entire dataset in one call. Think of it as a **triage tool** — it shows you *where* to look; the actual investigation and fixing happens afterwards."
]
@@ -113,11 +133,11 @@
"| How many distinct values does `region` have? Does that seem right? | *...* |\n",
"| What is unusual about `purchase_amount`? | *...* |\n",
"\n",
"*(Double-click to fill in your answers)*\n",
"\n",
"\n",
"---\n",
"\n",
"## Part 3 Navigate and Inspect with D-Tale\n",
"## Part 3: Navigate and Inspect with D-Tale\n",
"\n",
"Before writing any cleaning code, use D-Tale to browse the raw data and *see* the problems with your own eyes. You will not clean anything here — D-Tale is your inspection tool.\n",
"\n",
@@ -161,7 +181,7 @@
"\n",
"---\n",
"\n",
"## Part 4 Clean with Pandas\n",
"## Part 4: Clean with Pandas\n",
"\n",
"We will work through seven issue categories. Each section follows the same pattern:\n",
"1. **Inspect** — confirm the problem in code\n",
@@ -187,7 +207,7 @@
"source": [
"---\n",
"\n",
"### 4.1 Boolean columns: inconsistent encoding\n",
"### 4.1. Boolean columns: inconsistent encoding\n",
"\n",
"Three columns (`crash_flag`, `is_featured_event`, `is_long_session`) each have **8 different representations** of the same two values: `True`, `False`, `true`, `false`, `1`, `0`, `Yes`, `No`.\n",
"\n",
@@ -242,7 +262,7 @@
"source": [
"---\n",
"\n",
"### 4.2 Categorical columns: case and whitespace inconsistency\n",
"### 4.2. Categorical columns: case and whitespace inconsistency\n",
"\n",
"Four columns have values that are logically identical but differ in case or surrounding whitespace:\n",
"- `region` — 32 variants of 5 values (e.g. `us-west`, `US-WEST`, `Us-west`, `' us-west '`)\n",
@@ -319,7 +339,7 @@
"source": [
"---\n",
"\n",
"### 4.3 `purchase_amount`: comma as decimal separator\n",
"### 4.3. `purchase_amount`: comma as decimal separator\n",
"\n",
"About 12% of rows use a comma instead of a decimal point (`1,80` instead of `1.80`). This prevented pandas from reading the column as numeric, so it was loaded as `object`.\n",
"\n",
@@ -364,7 +384,7 @@
"source": [
"---\n",
"\n",
"### 4.4 Missing values: decisions and strategy\n",
"### 4.4. Missing values: decisions and strategy\n",
"\n",
"Not all missing values are the same. Before deciding what to do, you need to understand *why* the value is missing — the reason determines the correct action.\n",
"\n",
@@ -378,7 +398,7 @@
"\n",
"<br>\n",
"\n",
"> **⚠️ Context always matters.** There is no universal rule for missing values. The decisions above are reasonable for this dataset and analytical goal but a different context might lead to different choices.\n"
"> **⚠️ Context always matters.** There is no universal rule for missing values. The decisions above are reasonable for this dataset and analytical goal, but a different context might lead to different choices.\n"
]
},
{
@@ -417,7 +437,7 @@
"source": [
"---\n",
"\n",
"### 4.5 Outliers: `avg_fps`\n",
"### 4.5. Outliers: `avg_fps`\n",
"\n",
"The `avg_fps` column has a maximum of 10,000 fps — physically impossible for a game running in real time. The 75th percentile is ~82 fps, confirming that 10,000 is a logging error, not an extreme but plausible value.\n",
"\n",
@@ -458,7 +478,7 @@
"source": [
"---\n",
"\n",
"### 4.6 Datetime columns: mixed formats\n",
"### 4.6. Datetime columns: mixed formats\n",
"\n",
"The `start_time` and `end_time` columns contain timestamps in at least four different formats:\n",
"\n",
@@ -687,7 +707,7 @@
"\n",
"---\n",
"\n",
"## Part 5 Verify with D-Tale\n",
"## Part 5: Verify with D-Tale\n",
"\n",
"Reload the cleaned dataframe into D-Tale and visually confirm the fixes. This is a quick sanity check — you are looking for anything that looks wrong before committing to the cleaned dataset."
]
@@ -718,7 +738,9 @@
"| `purchase_amount` | Describe → dtype and range | float64, no commas |\n",
"| `avg_fps` | Describe → max | Below 300 |\n",
"| `session_length_s` | Describe → min and max | No negatives, no values > 28800 |\n",
"| `start_time` | Describe → dtype | datetime64 |\n"
"| `start_time` | Describe → dtype | datetime64 |\n",
"\n",
"## Part 6: Compare initial and clean datasets with SweetViz"
]
},
{
@@ -728,7 +750,9 @@
"metadata": {},
"outputs": [],
"source": [
"# Debug\n",
"# Debug code: sometimes SweetViz cannot compare columns because of incompatible data type changes.\n",
"# This code iterates column by column to identify any column that raises an error; otherwise, SweetViz\n",
"# just crashes without much explanation\n",
"\n",
"# Test comparison column by column\n",
"# for col in df_clean.columns:\n",
@@ -773,7 +797,7 @@
"\n",
"---\n",
"\n",
"## Part 7 Save the Cleaned Dataset"
"## Part 7: Save the Cleaned Dataset"
]
},
{
@@ -814,10 +838,14 @@
"| Wrong decimal separator | `.str.replace(',', '.')` + `.astype(float)` |\n",
"| Structural missing values | `dropna(subset=[...])` with explicit rationale |\n",
"| Outliers | Boolean mask + `.loc[mask, col] = NaN` |\n",
"| Mixed datetime formats | `pd.to_datetime(utc=True, errors='coerce')` |\n",
"\n",
"→ In **Task 3**, you will apply these skills independently to a new dataset — with a checklist but without step-by-step guidance."
"| Mixed datetime formats | `pd.to_datetime(utc=True, errors='coerce')` |\n"
]
},
{
"cell_type": "markdown",
"id": "572f9d85",
"metadata": {},
"source": []
}
],
"metadata": {


@@ -1,12 +1,28 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": 1,
"id": "92169b19",
"metadata": {},
"outputs": [],
"source": [
"# 43679 -- Interactive Visualization\n",
"# 2025 - 2026\n",
"# 2nd semester\n",
"# Lab 1 - EDA (independent)\n",
"# ver 1.1\n",
"# 24022026 - Added questions at end; cleaning"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Lab 02 · Task 3 Independent EDA and Cleaning\n",
"# Lab 01<br>Task 3: Independent EDA and Cleaning\n",
"\n",
"The purpose of this task is for you to practice EDA on a new dataset in a more independent manner. Feel free to go back to Task 2's code and reuse it whenever it makes sense. Nevertheless, **don't limit yourself to just copy-pasting**; understand why you are applying each step. Understanding what the issues are and how to address them will be important for your final project.\n",
"\n",
"**Estimated time:** ~20 minutes \n",
"**Dataset:** `dataset_D_git_classroom_activity.csv`\n",
"\n",
"---\n",
@@ -34,12 +50,12 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Part 1 Load and Inspect"
"## Part 1: Load and Inspect"
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
@@ -70,7 +86,7 @@
"\n",
"---\n",
"\n",
"## Part 2 Automated Profiling with SweetViz\n",
"## Part 2: Automated Profiling with SweetViz\n",
"\n",
"Generate a SweetViz report on the raw dataset. Use it to fill in the triage checklist below before moving on."
]
@@ -105,7 +121,7 @@
"\n",
"---\n",
"\n",
"## Part 3 Navigate and Inspect with D-Tale\n",
"## Part 3: Navigate and Inspect with D-Tale\n",
"\n",
"Launch D-Tale and use it to confirm each issue visually. Do not clean anything here."
]
@@ -149,9 +165,9 @@
"\n",
"---\n",
"\n",
"## Part 4 Clean with Pandas\n",
"## Part 4: Clean with Pandas\n",
"\n",
"Work through each issue below. For each one: inspect fix verify. \n",
"Work through each issue below. For each one: **inspect --> fix --> verify**. \n",
"The first example in each category is more detailed; subsequent columns follow the same pattern.\n",
"\n",
"Start by creating a working copy:"
@@ -172,7 +188,7 @@
"source": [
"---\n",
"\n",
"### 4.1 Boolean columns\n",
"### 4.1. Boolean columns\n",
"\n",
"**Columns:** `is_weekend`, `label_is_high_quality`, `exam_period` \n",
"**Issue:** 8 different representations of True/False \n",
@@ -218,7 +234,7 @@
"source": [
"---\n",
"\n",
"### 4.2 `is_bot_user`: case and whitespace\n",
"### 4.2. `is_bot_user`: case and whitespace\n",
"\n",
"**Issue:** 6 variants of 2 values (`Human`, `Bot`) with mixed case and whitespace \n",
"**Approach:** `.str.strip().str.lower()` — no typos, no synonym merging needed"
@@ -260,7 +276,7 @@
"source": [
"---\n",
"\n",
"### 4.3 Categorical columns: case and whitespace\n",
"### 4.3. Categorical columns: case and whitespace\n",
"\n",
"**Columns:** `dominant_language`, `editor`, `os`, `event_type` \n",
"**Issue:** Many case/whitespace variants — strip and lowercase resolves most \n",
@@ -313,7 +329,7 @@
"source": [
"---\n",
"\n",
"### 4.4 `ci_status`: case, whitespace, and synonym merging\n",
"### 4.4. `ci_status`: case, whitespace, and synonym merging\n",
"\n",
"**Issue:** Case and whitespace variants — but also `FAILED` and `FAILURE` represent the same outcome and need to be merged into one canonical value. \n",
"**Approach:** Strip and lowercase first, then use `.replace()` to merge synonyms.\n",
@@ -338,6 +354,7 @@
"outputs": [],
"source": [
"# Fix ci_status — strip, lowercase, then merge synonyms\n",
"# You can use .replace({'current':'replaced'})\n",
"# Your code here\n"
]
},
@@ -355,13 +372,13 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"> **Your decision:** Which canonical form did you choose for `failed`/`failure`, and why?\n",
"> **Your decision:** Which canonical form did you choose for `failed`/`failure`, and why? This is where domain context matters: what is the common term?\n",
"\n",
"*(Double-click to write your answer)*\n",
"\n",
"---\n",
"\n",
"### 4.5 `coverage_percent`: comma decimal separator and type conversion\n",
"### 4.5. `coverage_percent`: comma decimal separator and type conversion\n",
"\n",
"**Issue:** Loaded as `object` — some values use a comma instead of a decimal point \n",
"**Approach:** Same as `purchase_amount` in Task 2 — `.str.replace()` then `.astype(float)`"
@@ -376,7 +393,10 @@
"# Inspect — how many rows have a comma?\n",
"print(df_clean['coverage_percent'].dtype)\n",
"comma_rows = df_clean['coverage_percent'].astype(str).str.contains(',', na=False)\n",
"print(f'Rows with comma: {comma_rows.sum()}')"
"print(f'Rows with comma: {comma_rows.sum()}')\n",
"\n",
"# Tip: are there any values outside the valid range?\n",
"# What is the valid range for this variable?"
]
},
{
@@ -396,8 +416,11 @@
"outputs": [],
"source": [
"# Verify\n",
"\n",
"print(f'dtype: {df_clean[\"coverage_percent\"].dtype}')\n",
"print(df_clean['coverage_percent'].describe().round(2))"
"print(df_clean['coverage_percent'].describe().round(2))\n",
"print(f'\\nValues < 0: {(df_clean[\"coverage_percent\"] < 0).sum()} rows')\n",
"print(f'Values > 100: {(df_clean[\"coverage_percent\"] > 100).sum()} rows')"
]
},
{
@@ -406,7 +429,7 @@
"source": [
"---\n",
"\n",
"### 4.6 Missing values: decisions and strategy\n",
"### 4.6. Missing values: decisions and strategy\n",
"\n",
"This dataset has four columns with missing values. Inspect each one and decide what to do.\n",
"\n",
@@ -460,7 +483,7 @@
"source": [
"---\n",
"\n",
"### 4.7 Outliers and impossible values\n",
"### 4.7. Outliers and impossible values\n",
"\n",
"Three issues to address:\n",
"\n",
@@ -555,7 +578,7 @@
"\n",
"---\n",
"\n",
"### 4.8 `timestamp`: mixed datetime formats *(optional)*\n",
    "### 4.8. **OPTIONAL** `timestamp`: mixed datetime formats\n",
"\n",
"Like Task 2, the `timestamp` column contains mixed datetime formats. However, unlike Task 2, there is no derived column that depends on it — so the impact of unresolved timestamps is lower here.\n",
"\n",
@@ -578,7 +601,7 @@
"source": [
"---\n",
"\n",
"## Part 5 Verify with D-Tale"
"## Part 5: Verify with D-Tale"
]
},
{
@@ -610,7 +633,7 @@
"\n",
"---\n",
"\n",
"## Part 6 Before vs After with SweetViz"
"## Part 6: Before vs After with SweetViz"
]
},
{
@@ -631,7 +654,7 @@
"source": [
"---\n",
"\n",
"## Part 7 Save"
"## Part 7: Save"
]
},
{
@@ -650,7 +673,7 @@
"source": [
"---\n",
"\n",
"## Reflection\n",
"## Final Questions\n",
"\n",
"Answer the following before finishing:\n",
"\n",
@@ -658,23 +681,29 @@
"\n",
"**2.** You found rows where `tests_failed > tests_run`. What does this kind of cross-column check tell you that a single-column inspection would have missed?\n",
"\n",
"**3.** For `ci_status`, you had to decide whether `failed` and `failure` are the same thing. What kind of knowledge beyond the data itself did you need to make that decision?\n",
    "**3.** For `ci_status`, you had to decide whether `failed` and `failure` are the same thing. What kind of knowledge, beyond the data itself, did you need to make that decision?\n",
"\n",
"**4.** Compare this dataset to the telemetry dataset from Task 2. Which issues were the same? Which were new? What does that tell you about the generality of the cleaning skills you are building?\n",
"\n",
"*(Double-click to write your answers)*"
"**4.** Compare this dataset to the telemetry dataset from Task 2. Which issues were the same? Which were new? What does that tell you about the generality of the cleaning skills you are building?\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"version": "3.10.0"
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,


@@ -0,0 +1,673 @@
{
"nbformat": 4,
"nbformat_minor": 5,
"metadata": {
"kernelspec": {"display_name": "Python 3", "language": "python", "name": "python3"},
"language_info": {"name": "python", "version": "3.10.0"}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Lab 02 · Task 3 — Independent EDA and Cleaning · SOLUTIONS\n",
"\n",
"**Dataset:** `dataset_D_git_classroom_activity.csv`\n",
"\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Part 1 — Load and Inspect"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"import sweetviz as sv\n",
"import dtale\n",
"import warnings\n",
"warnings.filterwarnings('ignore')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df = pd.read_csv('dataset_D_git_classroom_activity.csv')\n",
"\n",
"print(f'Shape: {df.shape}')\n",
"print('\\nColumn types:')\n",
"print(df.dtypes)\n",
"df.head()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> **What to note:**\n",
"> - `coverage_percent` should be numeric but is `object` — formatting problem in raw values\n",
"> - `is_weekend`, `label_is_high_quality`, `exam_period` should be boolean but are `object`\n",
"> - `commit_message_length` is `float64` rather than `int` — a sign that missing values forced a float type\n",
"\n",
"---\n",
"\n",
"## Part 2 — Automated Profiling with SweetViz"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"report = sv.analyze(df)\n",
"report.show_html('sweetviz_git_raw.html', open_browser=False)\n",
"print('Report saved. Open sweetviz_git_raw.html in your browser.')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Triage checklist — answers\n",
"\n",
"| Question | Finding |\n",
"|---|---|\n",
"| Which columns have missing values? Which has the most? | `pr_merge_time_hours` (71.7%), `commit_message_length` (7%), `build_duration_s` (2.1%), `time_to_ci_minutes` (2%) |\n",
"| Which columns should be boolean? | `is_weekend`, `label_is_high_quality`, `exam_period` |\n",
"| Which columns should be numeric? | `coverage_percent` — shown as TEXT due to comma decimal separators |\n",
"| `event_type` distinct count | ~42 — should be 7; case/whitespace variants |\n",
"| What is unusual about `ci_status`? | Besides case/whitespace variants, `FAILED` and `FAILURE` are synonyms that need merging |\n",
"| Suspicious numeric ranges | `lines_added` max 5000, `time_to_ci_minutes` max 1578, `pr_merge_time_hours` has negative values |\n",
"\n",
"---\n",
"\n",
"## Part 3 — Navigate and Inspect with D-Tale"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"d = dtale.show(df, host='127.0.0.1', subprocess=False, open_browser=False)\n",
"print('D-Tale running at:', d._url)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"## Part 4 — Clean with Pandas"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df_clean = df.copy()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"### 4.1 — Boolean columns"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Inspect\n",
"print(sorted(df_clean['is_weekend'].dropna().unique().tolist()))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Boolean keys included so the mapping is safe to re-run\n",
"bool_map = {\n",
" 'True': True, 'true': True, '1': True, 'Yes': True, True: True,\n",
" 'False': False, 'false': False, '0': False, 'No': False, False: False\n",
"}\n",
"\n",
"for col in ['is_weekend', 'label_is_high_quality', 'exam_period']:\n",
" df_clean[col] = df_clean[col].map(bool_map)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Verify\n",
"for col in ['is_weekend', 'label_is_high_quality', 'exam_period']:\n",
" print(f\"{col}: {df_clean[col].value_counts().to_dict()} | nulls: {df_clean[col].isna().sum()}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"### 4.2 — `is_bot_user`: case and whitespace"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Inspect\n",
"print(df_clean['is_bot_user'].value_counts().to_string())"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df_clean['is_bot_user'] = df_clean['is_bot_user'].str.strip().str.lower()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Verify\n",
"print(df_clean['is_bot_user'].value_counts())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"### 4.3 — Categorical columns: case and whitespace"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Inspect\n",
"print(f'dominant_language unique before: {df_clean[\"dominant_language\"].nunique()}')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Strip and lowercase for columns with pure case/whitespace variance\n",
"for col in ['dominant_language', 'editor', 'event_type']:\n",
" df_clean[col] = df_clean[col].str.strip().str.lower()\n",
"\n",
"# os: strip and lowercase, then merge win → windows\n",
"df_clean['os'] = (\n",
" df_clean['os']\n",
" .str.strip()\n",
" .str.lower()\n",
" .replace({'win': 'windows'})\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Verify\n",
"for col in ['dominant_language', 'editor', 'os', 'event_type']:\n",
" print(f\"{col} ({df_clean[col].nunique()} unique): {sorted(df_clean[col].dropna().unique().tolist())}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"### 4.4 — `ci_status`: case, whitespace, and synonym merging"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Inspect\n",
"print(df_clean['ci_status'].value_counts().to_string())"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Step 1: strip and lowercase\n",
"# Step 2: merge 'failure' into 'failed'\n",
"# Rationale: both indicate the CI pipeline did not complete successfully.\n",
"# 'failed' is the more common and explicit term in CI tooling (GitHub Actions, Jenkins).\n",
"df_clean['ci_status'] = (\n",
" df_clean['ci_status']\n",
" .str.strip()\n",
" .str.lower()\n",
" .replace({'failure': 'failed'})\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Verify\n",
"print(df_clean['ci_status'].value_counts())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> **Decision:** `failure` → `failed`. Both mean the CI pipeline did not complete successfully. `failed` is the canonical term used by major CI tools (GitHub Actions, Jenkins, GitLab CI) and is more explicit.\n",
"\n",
"---\n",
"\n",
"### 4.5 — `coverage_percent`: comma decimal separator, type conversion, and outliers"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Inspect\n",
"print(f'dtype: {df_clean[\"coverage_percent\"].dtype}')\n",
"comma_rows = df_clean['coverage_percent'].astype(str).str.contains(',', na=False)\n",
"print(f'Rows with comma: {comma_rows.sum()}')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Fix: replace comma, convert to float\n",
"df_clean['coverage_percent'] = (\n",
" df_clean['coverage_percent']\n",
" .astype(str)\n",
" .str.replace(',', '.', regex=False)\n",
" .replace('nan', float('nan'))\n",
" .astype(float)\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Verify — also check for values outside the valid 0-100 range\n",
"print(f'dtype: {df_clean[\"coverage_percent\"].dtype}')\n",
"print(df_clean['coverage_percent'].describe().round(2))\n",
"print(f'\\nValues < 0: {(df_clean[\"coverage_percent\"] < 0).sum()} rows')\n",
"print(f'Values > 100: {(df_clean[\"coverage_percent\"] > 100).sum()} rows')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# coverage_percent must be in [0, 100] — values outside this range are logging errors\n",
"invalid_cov = (df_clean['coverage_percent'] < 0) | (df_clean['coverage_percent'] > 100)\n",
"df_clean.loc[invalid_cov, 'coverage_percent'] = float('nan')\n",
"print(f'Invalid coverage values set to NaN: {invalid_cov.sum()}')\n",
    "print(f'Range after: {df_clean[\"coverage_percent\"].min():.1f} to {df_clean[\"coverage_percent\"].max():.1f}')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"### 4.6 — Missing values: decisions and strategy"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Inspect missing counts\n",
"missing = df_clean.isnull().sum()\n",
"pct = (missing / len(df_clean) * 100).round(1)\n",
"pd.DataFrame({'missing': missing, '%': pct})[missing > 0]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Investigate pr_merge_time_hours — which event types have non-null values?\n",
"print(df_clean.loc[df_clean['pr_merge_time_hours'].notna(), 'event_type'].value_counts())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> **Finding:** `pr_merge_time_hours` is only non-null for `pr_merged` and `pr_opened` events — exactly the rows where a merge time is meaningful. This is **structural missingness (MNAR — Missing Not At Random)**, not a data quality problem. Imputing or dropping these rows would destroy valid analytical signal. Keep as NaN.\n",
"\n",
"| Column | Decision | Rationale |\n",
"|---|---|---|\n",
"| `pr_merge_time_hours` | Keep NaN | Structural: only meaningful for PR events |\n",
"| `commit_message_length` | Keep NaN | Unclear cause — may be bot commits or merge commits without messages |\n",
"| `build_duration_s` | Keep NaN | Sporadic; likely CI jobs that did not reach the build phase |\n",
"| `time_to_ci_minutes` | Keep NaN | Sporadic; likely events that did not trigger CI |"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# All four columns: leave as NaN — no action needed\n",
"# (Documented above)\n",
"print('Missing value strategy: all four columns kept as NaN.')\n",
"print('No rows dropped.')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"### 4.7 — Outliers and impossible values"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# A — Negative pr_merge_time_hours\n",
"neg_mask = df_clean['pr_merge_time_hours'] < 0\n",
"print(f'Negative pr_merge_time_hours: {neg_mask.sum()}')\n",
"print(df_clean.loc[neg_mask, ['event_type', 'pr_merge_time_hours']].head())"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Fix A\n",
"df_clean.loc[neg_mask, 'pr_merge_time_hours'] = float('nan')\n",
"print(f'Negative values set to NaN: {neg_mask.sum()}')\n",
"print(f'New min: {df_clean[\"pr_merge_time_hours\"].min():.2f}')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# B — tests_failed > tests_run (cross-column logical check)\n",
"impossible_mask = df_clean['tests_failed'] > df_clean['tests_run']\n",
"print(f'Rows where tests_failed > tests_run: {impossible_mask.sum()}')\n",
"print(df_clean.loc[impossible_mask, ['tests_run', 'tests_failed']].describe().round(1))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Fix B — set tests_failed to NaN for impossible rows\n",
"# We do not touch tests_run — it may be correct; tests_failed is the unreliable value\n",
"df_clean.loc[impossible_mask, 'tests_failed'] = float('nan')\n",
"print(f'tests_failed set to NaN: {impossible_mask.sum()}')\n",
"# Verify: no remaining impossible rows\n",
"remaining = df_clean['tests_failed'] > df_clean['tests_run']\n",
"print(f'Remaining impossible rows: {remaining.sum()}')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# C — lines_added and lines_deleted outliers\n",
"print('lines_added distribution:')\n",
"print(df_clean['lines_added'].describe().round(1))\n",
"print(f'\\nRows > 1000 lines added: {(df_clean[\"lines_added\"] > 1000).sum()}')\n",
"print(df_clean.loc[df_clean['lines_added'] > 1000,\n",
" ['event_type', 'lines_added', 'lines_deleted', 'dominant_language']].head(8).to_string())"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Decision: commits adding or deleting > 1000 lines are flagged as outliers.\n",
"# While large commits can be legitimate (adding a framework, vendoring dependencies),\n",
"# values of 5000 lines are extreme for a classroom context and likely logging errors.\n",
"# We set them to NaN rather than dropping — other columns in these rows remain valid.\n",
"threshold = 1000\n",
"large_add = df_clean['lines_added'] > threshold\n",
"large_del = df_clean['lines_deleted'] > threshold\n",
"\n",
"df_clean.loc[large_add, 'lines_added'] = float('nan')\n",
"df_clean.loc[large_del, 'lines_deleted'] = float('nan')\n",
"\n",
"print(f'lines_added outliers set to NaN: {large_add.sum()}')\n",
"print(f'lines_deleted outliers set to NaN: {large_del.sum()}')\n",
"print(f'\\nlines_added max after: {df_clean[\"lines_added\"].max()}')\n",
"print(f'lines_deleted max after: {df_clean[\"lines_deleted\"].max()}')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"### 4.8 — `timestamp`: mixed datetime formats *(optional)*"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# First pass — handles ISO 8601 formats\n",
"df_clean['timestamp'] = pd.to_datetime(df_clean['timestamp'], utc=True, errors='coerce')\n",
"print(f'timestamp dtype: {df_clean[\"timestamp\"].dtype}')\n",
"print(f'Unparsed (NaT) after first pass: {df_clean[\"timestamp\"].isna().sum()}')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# The remaining NaTs are DD/MM/YYYY format rows — apply a second pass\n",
"# using the systematic try_formats approach from Task 2\n",
"\n",
"def try_formats(series, formats):\n",
" result = pd.Series(pd.NaT, index=series.index, dtype='datetime64[ns, UTC]')\n",
" remaining = series.copy()\n",
" for fmt in formats:\n",
" parsed = pd.to_datetime(remaining, format=fmt, errors='coerce', utc=True)\n",
" resolved_idx = parsed.index[parsed.notna()]\n",
" result.loc[resolved_idx] = parsed.loc[resolved_idx]\n",
" remaining = remaining.drop(index=resolved_idx)\n",
" return result\n",
"\n",
"candidate_formats = [\n",
" '%d/%m/%Y %H:%M',\n",
" '%m/%d/%Y %H:%M',\n",
" '%d/%m/%Y',\n",
" '%m/%d/%Y',\n",
"]\n",
"\n",
"unparsed_idx = df_clean.index[df_clean['timestamp'].isna()]\n",
"raw_unparsed = df.loc[unparsed_idx, 'timestamp']\n",
"resolved = try_formats(raw_unparsed, candidate_formats)\n",
"df_clean.loc[unparsed_idx, 'timestamp'] = resolved\n",
"\n",
"print(f'Resolved in second pass: {resolved.notna().sum()}')\n",
"print(f'Still NaT (truly ambiguous): {df_clean[\"timestamp\"].isna().sum()}')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"## Part 5 — Verify with D-Tale"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"d.kill()\n",
"d_clean = dtale.show(df_clean, host='127.0.0.1', subprocess=False, open_browser=False)\n",
"print('D-Tale (cleaned) running at:', d_clean._url)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"## Part 6 — Before vs After with SweetViz"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Exclude timestamp — SweetViz cannot compare string vs datetime64\n",
"exclude = ['timestamp']\n",
"compare = sv.compare(\n",
" [df.drop(columns=exclude), 'Raw'],\n",
" [df_clean.drop(columns=exclude).reset_index(drop=True), 'Cleaned']\n",
")\n",
"compare.show_html('sweetviz_git_comparison.html', open_browser=False)\n",
"print('Comparison report saved.')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"## Part 7 — Save"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df_clean.to_csv('dataset_D_git_classroom_activity_clean.csv', index=False)\n",
"print(f'Saved: {len(df_clean)} rows, {len(df_clean.columns)} columns')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"## Reflection — Suggested Answers\n",
"\n",
"**1. `pr_merge_time_hours` missing 71.7% — is this a data quality problem?** \n",
"No. The missingness is structural: only `pr_merged` and `pr_opened` events have a merge time by definition. Every other event type (commit, push, CI run, etc.) has no merge time to record. This is MNAR — Missing Not At Random — and the pattern itself carries meaning. Imputing or dropping these rows would be wrong.\n",
"\n",
"**2. What does the cross-column check reveal that single-column inspection misses?** \n",
"Single-column inspection of `tests_failed` shows values ranging from 0 to 245 — nothing obviously wrong. Single-column inspection of `tests_run` also looks normal. Only by comparing the two together does the logical impossibility appear: you cannot fail more tests than you ran. This is a category of data quality issue that automated profiling tools like SweetViz do not detect.\n",
"\n",
"**3. What knowledge beyond the data was needed for `ci_status`?** \n",
"Domain knowledge about CI systems: that `failed` and `failure` refer to the same pipeline outcome, and that `failed` is the conventional term in tools like GitHub Actions and Jenkins. Without this knowledge, a purely statistical analysis would treat them as two separate categories and silently undercount CI failures.\n",
"\n",
"**4. What was the same as Task 2? What was new?** \n",
"Same: boolean encoding chaos, case/whitespace inconsistency, comma decimal separator, structural missingness reasoning, negative value treatment, D-Tale navigation, SweetViz before/after. \n",
    "New: synonym merging (`ci_status`), cross-column logical consistency check (`tests_failed > tests_run`), out-of-range numeric check (`coverage_percent` outside 0-100). \n",
"Takeaway: the core cleaning patterns transfer across domains. What changes is the domain knowledge needed to make the decisions — which canonical form to use, what physical constraints apply to each variable, what constitutes a structurally justified missing value."
]
}
]
}

deploy/README.md Normal file

@@ -0,0 +1,128 @@
# Lab 02 — Environment Setup
This document explains how to set up your Python environment and install all required packages before the lab session.
---
## Requirements
- **Python 3.10 or higher** (3.11 recommended; or live wildly and go for the latest one, though I have not tested it)
- **pip** (comes bundled with Python)
- A code editor with Jupyter notebook support — [VS Code](https://code.visualstudio.com/) with the [Jupyter extension](https://marketplace.visualstudio.com/items?itemName=ms-toolsai.jupyter) is recommended
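To check what you already have before creating anything, you can run:
```bash
# confirm the interpreter and pip versions meet the requirements above
python --version
pip --version
```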
---
## Step 1: Create a virtual environment
It is strongly recommended to work inside a virtual environment to avoid conflicts with other Python projects on your machine.
Open a terminal in the folder where you will work and run:
```bash
# Create the environment (only needed once)
python -m venv .venv
```
Then activate it:
```bash
# On Windows
.venv\Scripts\activate
# On macOS / Linux
source .venv/bin/activate
```
You should see `(.venv)` appear at the start of your terminal prompt. **You need to activate the environment every time you open a new terminal.**
---
## Step 2: Install required packages
With the environment active, run the following commands:
```bash
# Core data libraries
pip install "numpy<2.0"
pip install pandas matplotlib seaborn
# Automated EDA and profiling
pip install sweetviz
# Interactive dataframe explorer
pip install dtale
# Jupyter notebook support
pip install notebook ipykernel
```
> **Why `numpy<2.0`?** Several packages (including dtale and sweetviz) are not yet fully compatible with NumPy 2.x. Pinning to a 1.x version avoids runtime errors that can be difficult to diagnose.
Alternatively, you can install everything in a single command:
```bash
pip install "numpy<2.0" pandas matplotlib seaborn sweetviz dtale notebook ipykernel
```
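If VS Code does not offer your environment as a notebook kernel later on, you can register it with Jupyter explicitly. The `--name` and `--display-name` values below are only suggestions; they should match whatever you look for in the notebook's kernel picker:
```bash
# run once, with the .venv environment active
python -m ipykernel install --user --name lab02 --display-name "Python (lab02)"
```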
---
## Step 3: Verify the installation
Run the following in a terminal with the environment active (on Windows, where multi-line `python -c` commands can be awkward, you can instead paste the import lines into a notebook cell) to confirm everything is working:
```bash
python -c "
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import sweetviz as sv
import dtale
import numpy as np
print('numpy :', np.__version__)
print('pandas :', pd.__version__)
print('seaborn :', sns.__version__)
print('sweetviz: OK')
print('dtale : OK')
print('All packages installed successfully.')
"
```
---
## Step 4: D-Tale in VS Code (Windows)
D-Tale opens in a browser tab via a local server. On Windows, VS Code may not automatically forward the port if D-Tale binds to a network adapter other than the loopback address. All lab notebooks already include the correct launch code:
```python
d = dtale.show(df, host='127.0.0.1', subprocess=False, open_browser=False)
print('Open D-Tale at:', d._url)
```
If the URL does not open automatically, copy it from the output and paste it into your browser. If the page does not load, check the **Ports** panel at the bottom of VS Code and confirm port `40000` is being forwarded.
---
## Files for this lab
| File | Description |
|---|---|
| `lab01_task1_datasets.ipynb` | Task 1 — Datasaurus Dozen: why visualisation is essential |
| `lab01_task2_telemetry.ipynb` | Task 2 — Guided EDA and cleaning of game telemetry data |
| `lab01_task3_git_activity.ipynb` | Task 3 — Independent EDA and cleaning of Git classroom activity data |
| `datasaurus.csv` | Dataset for Task 1 |
| `dataset_A_indie_game_telemetry.csv` | Dataset for Task 2 |
| `dataset_D_git_classroom_activity.csv` | Dataset for Task 3 |
---
## Troubleshooting
**`ModuleNotFoundError` when running a notebook**
The notebook is using a different Python kernel, not the one from your virtual environment. In VS Code, click the kernel name in the top right of the notebook and select the kernel that points at your virtual environment (it may appear as **.venv** or **Python (lab02)**, depending on how the kernel was registered).
**NumPy version conflict errors**
Make sure you installed `numpy<2.0` as described in Step 2. If you already have a newer version, downgrade with:
```bash
pip install "numpy<2.0" --force-reinstall
```

deploy/datasaurus.csv Normal file

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -0,0 +1,545 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"id": "d321d996",
"metadata": {},
"outputs": [],
"source": [
"# 43679 -- Interactive Visualization\n",
"# 2025 - 2026\n",
"# 2nd semester\n",
"# Lab 1 - EDA (guided)\n",
"# ver 1.0 - 2026-02-20 Initial version\n",
"# ver 1.1 - 2026-02-23 Added more comments and explanations\n",
"# ver 1.2 - 2026-02-24 Added code for additional visualizations"
]
},
{
"cell_type": "markdown",
"id": "d44c354e",
"metadata": {},
"source": [
"# Lab 01<br>Task 1: Exploratory Data Analysis with Pandas & Seaborn\n",
"\n",
"This task serves two purposes. It introduces you to some of the basic tools to start understanding datasets and shows you why descriptive statistics may not be enough to understand the nature of a dataset.\n",
"\n",
    "This task also walks you through some basic visualizations of the datasets to show how the type of visualization matters when trying to understand the data.\n",
"\n",
"Additionally, this simple first task also serves the purpose of getting you acquainted with Jupyter notebooks.\n",
"\n",
"**Dataset:** `datasaurus.csv`\n",
"\n",
"---\n",
"\n",
"### Objectives\n",
"\n",
"By the end of this task you will be able to:\n",
"- Use `pandas` to inspect a dataset's structure, types, and summary statistics\n",
"- Apply grouped aggregations to compare subsets of data\n",
"- Use `seaborn` to produce scatter plots that reveal structure invisible to statistics\n",
"- Articulate *why* visualisation is an essential — not optional — step in data analysis\n",
"\n",
"---\n",
"\n",
"### Context\n",
"\n",
"The **Datasaurus Dozen** is a collection of 13 small datasets created by Matejka & Fitzmaurice (2017) to demonstrate a modern version of Anscombe's Quartet.\n",
"\n",
"This task will take you through the same journey a data analyst faces: you will start with raw numbers, run the usual summaries, and then discover, through visualisation, that numbers alone were hiding the story.\n",
"\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "350a4fd8",
"metadata": {},
"source": [
"## Part 1: Load and Inspect the Data\n",
"\n",
"Start by importing the libraries you need and loading the dataset."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ed1a7a01",
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"import seaborn as sns\n",
"import matplotlib.pyplot as plt\n",
"\n",
"# Configure plot style\n",
"sns.set_theme(style='whitegrid', palette='tab10')\n",
"plt.rcParams['figure.dpi'] = 100"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9cf77ef2",
"metadata": {},
"outputs": [],
"source": [
"# Load the dataset\n",
"df = pd.read_csv('datasaurus.csv')\n",
"\n",
"# Preview the first rows\n",
"df.head(10)"
]
},
{
"cell_type": "markdown",
"id": "a2e51209",
"metadata": {},
"source": [
"### 1.1. Structure and data types\n",
"\n",
"Before computing anything, always understand what you are working with."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6a45f4e3",
"metadata": {},
"outputs": [],
"source": [
"# Shape of the dataset (rows, columns)\n",
"print('Shape:', df.shape)\n",
"\n",
"# Column names and data types\n",
"print('\\nDtypes:')\n",
"print(df.dtypes)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d01329b3",
"metadata": {},
"outputs": [],
"source": [
"# How many unique sub-datasets are there, and how many rows does each contain?\n",
"print('Unique datasets:', df['dataset'].nunique())\n",
"print('\\nRows per dataset:')\n",
"print(df['dataset'].value_counts())"
]
},
{
"cell_type": "markdown",
"id": "1545a53f",
"metadata": {},
"source": [
"### 1.2. Overall summary statistics\n",
"\n",
"Use `describe()` to get a global numerical summary of `x` and `y`."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a92b670e",
"metadata": {},
"outputs": [],
"source": [
"# Summary statistics for the entire dataset\n",
"df[['x', 'y']].describe().round(2)"
]
},
{
"cell_type": "markdown",
"id": "16b1a9e3",
"metadata": {},
"source": [
"---\n",
"\n",
"## Part 2: Grouped Statistics: The Reveal\n",
"\n",
"The dataset column holds 13 different named groups. Let's compute summary statistics **per group** and see if the groups differ."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e7693c95",
"metadata": {},
"outputs": [],
"source": [
"# Compute mean and standard deviation of x and y for each sub-dataset\n",
"grouped_stats = (\n",
" df.groupby('dataset')[['x', 'y']]\n",
" .agg(['mean', 'std'])\n",
" .round(2)\n",
")\n",
"\n",
"grouped_stats"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "837a2552",
"metadata": {},
"outputs": [],
"source": [
"# Also compute the Pearson correlation between x and y per group\n",
    "correlation = df.groupby('dataset')[['x', 'y']].apply(lambda g: g['x'].corr(g['y'])).round(2)\n",
"correlation.name = 'corr(x,y)'\n",
"print(correlation)"
]
},
{
"cell_type": "markdown",
"id": "c40be027",
"metadata": {},
"source": [
"> **Question:** Look at the table above. Are the 13 datasets statistically different from each other? \n",
    "> Write your answer here before moving on.\n",
    "\n",
    "*(Double-click to write your answer)*\n",
    "\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "cc4c40dd",
"metadata": {},
"source": [
"<!-- ## Part 3: Now Let us Actually Look at the Data\n",
"\n",
"We will focus on three sub-datasets: **`dino`**, **`star`**, and **`bullseye`**. These three were chosen because they produce a dramatic visual contrast despite their identical statistics.\n",
"\n",
"Later, feel free to explore the remaining 10 groups. -->"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d4fde0b1",
"metadata": {},
"outputs": [],
"source": [
"# Filter to the three focus datasets\n",
"focus = ['dino', 'star', 'bullseye']\n",
"df_focus = df[df['dataset'].isin(focus)].copy()\n",
"\n",
"print(f'Rows in subset: {len(df_focus)}')"
]
},
{
"cell_type": "markdown",
"id": "86d8b1b6",
"metadata": {},
"source": [
    "### 3.1. Individual scatter plots"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c2f4c527",
"metadata": {},
"outputs": [],
"source": [
"fig, axes = plt.subplots(1, 3, figsize=(15, 5), sharey=True)\n",
"\n",
"colors = sns.color_palette('tab10', 3)\n",
"\n",
"for ax, name, color in zip(axes, focus, colors):\n",
" subset = df_focus[df_focus['dataset'] == name]\n",
" ax.scatter(subset['x'], subset['y'], color=color, alpha=0.7, s=40, edgecolors='white', linewidths=0.4)\n",
" ax.set_title(name, fontsize=14, fontweight='bold')\n",
" ax.set_xlabel('x')\n",
" ax.set_ylabel('y')\n",
"\n",
"fig.suptitle('Same statistics, completely different data', fontsize=16, fontweight='bold', y=1.02)\n",
"plt.tight_layout()\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"id": "538ecb6f",
"metadata": {},
"source": [
"### 3.2 — Side-by-side with statistics overlay\n",
"\n",
"Let's add the mean and standard deviation annotations to make the point explicit."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d677b3ec",
"metadata": {},
"outputs": [],
"source": [
"fig, axes = plt.subplots(1, 3, figsize=(15, 5.5), sharey=True)\n",
"\n",
"for ax, name, color in zip(axes, focus, colors):\n",
" subset = df_focus[df_focus['dataset'] == name]\n",
" \n",
" ax.scatter(subset['x'], subset['y'], color=color, alpha=0.65, s=40,\n",
" edgecolors='white', linewidths=0.4, label='observations')\n",
" \n",
" # Mean crosshair\n",
" mx, my = subset['x'].mean(), subset['y'].mean()\n",
" ax.axvline(mx, color='black', linestyle='--', linewidth=1.0, alpha=0.6)\n",
" ax.axhline(my, color='black', linestyle='--', linewidth=1.0, alpha=0.6)\n",
" ax.scatter([mx], [my], color='black', s=80, zorder=5, label=f'mean ({mx:.1f}, {my:.1f})')\n",
" \n",
" # Stats box\n",
" stats_text = (\n",
" f\"mean x = {subset['x'].mean():.2f}\\n\"\n",
" f\"mean y = {subset['y'].mean():.2f}\\n\"\n",
" f\"sd x = {subset['x'].std():.2f}\\n\"\n",
" f\"sd y = {subset['y'].std():.2f}\\n\"\n",
" f\"corr = {subset['x'].corr(subset['y']):.2f}\"\n",
" )\n",
" ax.text(0.03, 0.97, stats_text, transform=ax.transAxes,\n",
" fontsize=8.5, verticalalignment='top', fontfamily='monospace',\n",
" bbox=dict(boxstyle='round,pad=0.4', facecolor='white', alpha=0.85, edgecolor='grey'))\n",
" \n",
" ax.set_title(name, fontsize=14, fontweight='bold')\n",
" ax.set_xlabel('x')\n",
" ax.set_ylabel('y')\n",
"\n",
"fig.suptitle('Datasaurus Dozen — statistics are identical, shapes are not',\n",
" fontsize=14, fontweight='bold', y=1.01)\n",
"plt.tight_layout()\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"id": "e295910e",
"metadata": {},
"source": [
"> **❓ Question:** What would a data analyst have concluded if they had only looked at the summary statistics table? \n",
"> What does this tell you about when and why visualisation is necessary?\n",
"\n",
"*(Double-click to write your answer here)*\n",
"\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "86dea1fb",
"metadata": {},
"source": [
"## Part 4 — Small Multiples: All 13 Datasets at Once\n",
"\n",
"Seaborn's `FacetGrid` makes it easy to produce a *small multiples* plot — the same chart type repeated for each group. This is a powerful pattern for comparing distributions across many categories."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d7eb9f5a",
"metadata": {},
"outputs": [],
"source": [
"g = sns.FacetGrid(df, col='dataset', col_wrap=5, height=3, aspect=1.0,\n",
" sharex=False, sharey=False)\n",
"g.map(sns.scatterplot, 'x', 'y', alpha=0.6, s=18, color='steelblue', edgecolor='white', linewidth=0.2)\n",
"g.set_titles(col_template='{col_name}', size=10)\n",
"g.figure.suptitle('All 13 Datasaurus Dozen datasets — identical statistics',\n",
" fontsize=13, fontweight='bold', y=1.01)\n",
"plt.tight_layout()\n",
"plt.show()"
]
},
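{
"cell_type": "markdown",
"id": "relplot_note",
"metadata": {},
"source": [
"As a side note — the same small-multiples figure can often be produced in a single call with `relplot`, which builds a `FacetGrid` for you. A minimal sketch (the styling kwargs here are illustrative, not the exact ones used above):\n",
"\n",
"```python\n",
"sns.relplot(data=df, x='x', y='y', col='dataset', col_wrap=5,\n",
"            height=3, kind='scatter', alpha=0.6, s=18)\n",
"```"
]
},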
{
"cell_type": "markdown",
"id": "becc716d",
"metadata": {},
"source": [
"---\n",
"\n",
"## Some Exploration\n",
"\n",
"For each chart type below, run the cell and then answer the key question:\n",
"\n",
"> **Does this chart type reveal the structural differences between datasets, or does it hide them?**\n",
"\n",
"---\n",
"\n",
"### Histograms\n",
"\n",
"Plot the marginal distribution of `x` and `y` separately for each focus dataset."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "83a2bc01",
"metadata": {},
"outputs": [],
"source": [
"fig, axes = plt.subplots(2, 3, figsize=(15, 8), sharey=True)\n",
"\n",
"for col_idx, var in enumerate(['x', 'y']):\n",
" for ax, name, color in zip(axes[col_idx], focus, colors):\n",
" subset = df_focus[df_focus['dataset'] == name]\n",
" sns.histplot(subset[var], ax=ax, color=color, bins=15, kde=False)\n",
" ax.set_title(f'{name} — {var}', fontsize=12, fontweight='bold')\n",
" ax.set_xlabel(var)\n",
"\n",
"fig.suptitle('Histograms — marginal distributions of x and y per dataset',\n",
" fontsize=13, fontweight='bold', y=1.01)\n",
"plt.tight_layout()\n",
"plt.show()\n"
]
},
{
"cell_type": "markdown",
"id": "exploration_md",
"metadata": {},
"source": [
"> **Answer:** Partially. Histograms show the marginal distribution of one variable at a time, so they reveal that the datasets differ along each axis individually. But they lose all information about the *relationship* between x and y — you cannot see the dinosaur or the star from a histogram alone. They reveal more than summary statistics, but less than a scatterplot.\n",
"\n",
"---\n",
"\n",
"### KDE plots\n",
"\n",
"Overlay density curves for the three focus datasets on the same axis."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3cc44f9f",
"metadata": {},
"outputs": [],
"source": [
"fig, axes = plt.subplots(1, 2, figsize=(14, 5))\n",
"\n",
"for ax, var in zip(axes, ['x', 'y']):\n",
" sns.kdeplot(data=df_focus, x=var, hue='dataset', ax=ax, fill=True, alpha=0.3, linewidth=1.5)\n",
" ax.set_title(f'KDE of {var} — three focus datasets', fontsize=12, fontweight='bold')\n",
" ax.set_xlabel(var)\n",
"\n",
"fig.suptitle('KDE plots — overlaid density curves per dataset',\n",
" fontsize=13, fontweight='bold')\n",
"plt.tight_layout()\n",
"plt.show()\n"
]
},
{
"cell_type": "markdown",
"id": "exploration_md",
"metadata": {},
"source": [
"> **Answer:** Same limitation as histograms — KDE plots show the marginal density of one variable at a time. The three curves look somewhat different from each other (especially for y), but you cannot reconstruct the actual shapes from them. The structural difference between dino, star, and bullseye is heavily underrepresented.\n",
"\n",
"---\n",
"\n",
"### Pair plots\n",
"\n",
"Plot all pairwise combinations of variables, coloured by dataset."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "exploration_cell",
"metadata": {},
"outputs": [],
"source": [
"g = sns.pairplot(df_focus, hue='dataset', plot_kws={'alpha': 0.5, 's': 20},\n",
" diag_kind='kde', height=3.5)\n",
"g.figure.suptitle('Pair plot — dino, star, bullseye', fontsize=13,\n",
" fontweight='bold', y=1.01)\n",
"plt.show()\n"
]
},
{
"cell_type": "markdown",
"id": "exploration_md",
"metadata": {},
"source": [
"> **Answer:** Yes — the off-diagonal scatter plot (x vs y) fully reveals the structural differences, showing the dinosaur, star, and bullseye shapes clearly. The diagonal KDE plots add the marginal distributions. For a dataset with only two variables the pair plot is essentially a scatter plot with extras, but the pattern scales well to datasets with many variables.\n",
"\n",
"---\n",
"\n",
"### Box plots\n",
"\n",
"Summarise the distribution of `x` and `y` per dataset using box plots."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "exploration_cell",
"metadata": {},
"outputs": [],
"source": [
"fig, axes = plt.subplots(1, 2, figsize=(14, 5))\n",
"\n",
"for ax, var in zip(axes, ['x', 'y']):\n",
" sns.boxplot(data=df_focus, x='dataset', y=var, ax=ax,\n",
" palette='tab10', width=0.5, linewidth=1.2)\n",
" ax.set_title(f'Box plot of {var} per dataset', fontsize=12, fontweight='bold')\n",
" ax.set_xlabel('dataset')\n",
" ax.set_ylabel(var)\n",
"\n",
"fig.suptitle('Box plots — do they reveal the structural differences?',\n",
" fontsize=13, fontweight='bold')\n",
"plt.tight_layout()\n",
"plt.show()\n"
]
},
{
"cell_type": "markdown",
"id": "exploration_md",
"metadata": {},
"source": [
"> **Answer:** No — and this is the most important result. The three box plots look nearly identical for both x and y: same median, same IQR, same whiskers. Box plots summarise only five statistics per group (min, Q1, median, Q3, max), so they suffer the same blindspot as the summary statistics table. The dinosaur, star, and bullseye are completely invisible. Some chart types hide structure rather than revealing it — and box plots are a prime example."
]
},
{
"cell_type": "markdown",
"id": "3c09cd29",
"metadata": {},
"source": [
"---\n",
"\n",
"## Key Takeaways\n",
"\n",
"- Summary statistics (mean, SD, correlation) can be completely identical across datasets with totally different structure\n",
"- Visualisation is not a finishing step — it is a **diagnostic step** that must happen early\n",
"- Different chart types reveal different aspects: scatterplots show point-level structure, histograms show marginal distributions, box plots summarise spread but can hide shape\n",
"- The small multiples pattern (FacetGrid) is a powerful way to compare many groups at a glance\n",
"\n",
"--> In **Task 2**, you will move to a real-world dataset with real problems — and discover that the \"hard work\" you just did manually can be partially automated."
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}


@@ -0,0 +1,875 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"id": "e28cb3de",
"metadata": {},
"outputs": [],
"source": [
"# 43679 -- Interactive Visualization\n",
"# 2025 - 2026\n",
"# 2nd semester\n",
"# Lab 1 - EDA (guided)\n",
"# ver 1.2\n",
"# 24022026 - Cosmetics; added rationale for task in scope of course"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Lab 02<br>Task 2: Guided EDA and Data Cleaning\n",
"\n",
"The purpose of this task you to introduce you to the basic steps of performing data preparation for a dataset with several illustrative quality issues. In most situations you already have the basic code to be run; in others, you need to infer from existing code to complete the step. What is important here is for you to be able to identify the issues, understand the tools and approaches that may help tackling them, and acquire a systematic way of thinking about data preparation. **This is something you will need to do for your final project**.\n",
"\n",
"**Don't just run the code. Understand why it is needed and what it is doing**\n",
"\n",
"**NOTE**: For those cells asking questions or with tables that can be filled, you can just double-click the cell and edit it with your answers and rationale\n",
"\n",
"**Dataset:** `dataset_A_indie_game_telemetry.csv`\n",
"\n",
"---\n",
"\n",
"### Objectives\n",
"\n",
"By the end of this task you will be able to:\n",
"- Use **SweetViz** to rapidly profile a dataset and identify issues\n",
"- Use **D-Tale** to navigate and inspect a dataframe interactively\n",
"- Use **pandas** to fix the most common categories of data quality problems\n",
"- Make and justify cleaning decisions rather than applying fixes mechanically\n",
"\n",
"### Tools and their roles in this task\n",
"\n",
"| Tool | Role |\n",
"|---|---|\n",
"| **SweetViz** | Automated profiling: generate a report, triage what needs fixing |\n",
"| **D-Tale** | Interactive navigation: browse rows, inspect value counts, confirm fixes visually |\n",
"| **pandas** | All actual cleaning: every transformation is explicit, reproducible code |\n",
"\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Part 1 — Setup and First Look"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"import sweetviz as sv\n",
"import dtale\n",
"import warnings\n",
"warnings.filterwarnings('ignore')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Load the raw dataset — do NOT clean anything yet\n",
"df = pd.read_csv('dataset_A_indie_game_telemetry.csv')\n",
"\n",
"print(f'Shape: {df.shape}')\n",
"df.head()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Column names and types as pandas inferred them\n",
"print(df.dtypes)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> **⚠️ Notice:** Several columns that should be boolean (`crash_flag`, `is_featured_event`, `is_long_session`) or\n",
"> numeric (`purchase_amount`) have been inferred as `object`. This is your first signal that something is wrong.\n",
"\n",
"---\n",
"\n",
"## Part 2: Automated Profiling with SweetViz\n",
"\n",
"SweetViz generates a visual report for the entire dataset in one call. Think of it as a **triage tool** — it shows you *where* to look; the actual investigation and fixing happens afterwards."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Generate the profiling report (~3060 seconds)\n",
"report = sv.analyze(df)\n",
"report.show_html('sweetviz_raw_report.html', open_browser=True)\n",
"print('Report saved. Open sweetviz_raw_report.html in your browser.')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Open the report and answer the following before moving on.\n",
"\n",
"| Question | Your finding |\n",
"|---|---|\n",
"| Which columns have missing values? Which has the most? | *...* |\n",
"| Which columns are shown as TEXT but should be boolean or numeric? | *...* |\n",
"| Are there numeric columns with suspicious ranges? | *...* |\n",
"| How many distinct values does `region` have? Does that seem right? | *...* |\n",
"| What is unusual about `purchase_amount`? | *...* |\n",
"\n",
"\n",
"\n",
"---\n",
"\n",
"## Part 3: Navigate and Inspect with D-Tale\n",
"\n",
"Before writing any cleaning code, use D-Tale to browse the raw data and *see* the problems with your own eyes. You will not clean anything here — D-Tale is your inspection tool.\n",
"\n",
"**Launch D-Tale:**"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# This will open an interactive D-Tale session in your browser, allowing you to explore the raw dataset in more detail\n",
"# The subprocess=True argument allows D-Tale to run in a separate process, which can help avoid issues with Jupyter notebooks\n",
"# Otherwise, D-Tale would block the notebook until you close the D-Tale session, which is not ideal for interactive exploration\n",
"d = dtale.show(df, host='127.0.0.1', subprocess=True, open_browser=True)\n",
"print('=' * 50)\n",
"print('D-Tale is running.')\n",
"print('Open this URL in your browser:', d._url)\n",
"print('In VS Code: Ctrl+click the URL above.')\n",
"print('=' * 50)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Inspection checklist\n",
"\n",
"Use D-Tale to confirm each issue SweetViz flagged. For each column, click the column header → **Describe** to see value counts and distribution.\n",
"\n",
"| What to inspect | How to do it in D-Tale | What you should see |\n",
"|---|---|---|\n",
"| `crash_flag` unique values | Column header → Describe | 8 variants of True/False |\n",
"| `region` unique values | Column header → Describe | ~32 variants of 5 region names |\n",
"| `input_method` unique values | Column header → Describe | A typo: `controllr` |\n",
"| `purchase_amount` raw values | Sort column ascending | Some values use comma: `1,80` |\n",
"| `avg_fps` distribution | Column header → Describe | Max of 10,000 — clearly wrong |\n",
"| Missing values overview | Top menu → Describe (all columns) | `gpu_model` dominates |\n",
"\n",
"<br>\n",
"\n",
"> Once you have seen the problems in the raw data, come back to the notebook for cleaning.\n",
"\n",
"---\n",
"\n",
"## Part 4: Clean with Pandas\n",
"\n",
"We will work through seven issue categories. Each section follows the same pattern:\n",
"1. **Inspect** — confirm the problem in code\n",
"2. **Fix** — apply the pandas transformation\n",
"3. **Verify** — check the result\n",
"\n",
"We work on a copy of the original dataframe so the raw data is always available for comparison."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Always work on a copy — keep df as the unchanged original\n",
"df_clean = df.copy()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"### 4.1. Boolean columns: inconsistent encoding\n",
"\n",
"Three columns (`crash_flag`, `is_featured_event`, `is_long_session`) each have **8 different representations** of the same two values: `True`, `False`, `true`, `false`, `1`, `0`, `Yes`, `No`.\n",
"\n",
"The fix is to define an explicit mapping and apply it with `.map()`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Inspect — confirm the problem\n",
"print('crash_flag unique values:', sorted(df_clean['crash_flag'].dropna().unique()))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Define the mapping for replacements\n",
"# Why did I place True:True and False: False? Ideas?\n",
"\n",
"bool_map = {\n",
" 'True': True, 'true': True, '1': True, 'Yes': True, True: True,\n",
" 'False': False, 'false': False, '0': False, 'No': False, False: False\n",
"}\n",
"\n",
"df_clean['crash_flag'] = df_clean['crash_flag'].map(bool_map)\n",
"\n",
"print('crash_flag after mapping:')\n",
"print(df_clean['crash_flag'].value_counts())\n",
"print('Nulls:', df_clean['crash_flag'].isna().sum())"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# TO DO:\n",
"# Apply the same mapping to the other two boolean columns\n",
"# Follow the same pattern as above for is_featured_event and is_long_session\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"### 4.2. Categorical columns: case and whitespace inconsistency\n",
"\n",
"Four columns have values that are logically identical but differ in case or surrounding whitespace:\n",
"- `region` — 32 variants of 5 values (e.g. `us-west`, `US-WEST`, `Us-west`, `' us-west '`)\n",
"- `map_name` — 36 variants of 6 values\n",
"- `platform` — 32 variants of 6 values\n",
"- `input_method` — 30 variants, including a **typo**: `controllr`\n",
"\n",
"The fix uses pandas string methods: `.str.strip()` removes surrounding whitespace, `.str.lower()` normalises case. They can be chained."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Inspect — how many unique values before cleaning?\n",
"print('region unique before:', df_clean['region'].unique())"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Fix region: strip whitespace and convert to lowercase\n",
"df_clean['region'] = df_clean['region'].str.strip().str.lower()\n",
"\n",
"# Verify\n",
"print('region unique after:', df_clean['region'].unique())\n",
"print(df_clean['region'].value_counts())"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# TO DO: \n",
"# Apply the same strip + lower to map_name and platform\n",
"# Follow the same pattern as above\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# input_method needs an extra step: fix the typo and standardise kb/m → kbm\n",
"\n",
"# Step 0: Inspect\n",
"print('input_method unique before:', df_clean['input_method'].unique())\n",
"\n",
"# Step 1: strip and lowercase first\n",
"df_clean['input_method'] = df_clean['input_method'].str.strip().str.lower()\n",
"\n",
"# Step 2: fix the two inconsistencies with replace()\n",
"df_clean['input_method'] = df_clean['input_method'].replace({\n",
" 'controllr': 'controller', \n",
" 'kb/m': 'kbm' \n",
"})\n",
"\n",
"# Verify — should now show exactly 3 unique values\n",
"print('input_method unique after:', df_clean['input_method'].unique())\n",
"print(df_clean['input_method'].value_counts())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"### 4.3. `purchase_amount`: comma as decimal separator\n",
"\n",
"About 12% of rows use a comma instead of a decimal point (`1,80` instead of `1.80`). This prevented pandas from reading the column as numeric, so it was loaded as `object`.\n",
"\n",
"The fix: replace the comma in the string, then convert the column type."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Inspect — how many rows have a comma?\n",
"comma_rows = df_clean['purchase_amount'].astype(str).str.contains(',', na=False)\n",
"print(f'Rows with comma separator: {comma_rows.sum()}')\n",
"print('Examples:', df_clean.loc[comma_rows, 'purchase_amount'].unique()[:6])"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Fix: replace comma with decimal point, then convert to float\n",
"df_clean['purchase_amount'] = (\n",
" df_clean['purchase_amount']\n",
" .astype(str) # ensure we are working with strings\n",
" .str.replace(',', '.', regex=False) # swap the separator\n",
" .replace('nan', float('nan')) # restore actual NaN rows\n",
" .astype(float) # convert to numeric\n",
")\n",
"\n",
"# Verify\n",
"print('dtype:', df_clean['purchase_amount'].dtype)\n",
"print(df_clean['purchase_amount'].describe().round(2))"
]
},
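{
"cell_type": "markdown",
"metadata": {},
"source": [
"> **Aside:** an alternative to the `.replace('nan', float('nan'))` trick is to let pandas handle unconvertible values directly with `pd.to_numeric(..., errors='coerce')` — anything that still fails to parse becomes `NaN`. A sketch of the same fix:\n",
"\n",
"```python\n",
"df_clean['purchase_amount'] = pd.to_numeric(\n",
"    df_clean['purchase_amount'].astype(str).str.replace(',', '.', regex=False),\n",
"    errors='coerce'\n",
")\n",
"```"
]
},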
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"### 4.4. Missing values: decisions and strategy\n",
"\n",
"Not all missing values are the same. Before deciding what to do, you need to understand *why* the value is missing — the reason determines the correct action.\n",
"\n",
"| Column | Missing | Why | Decision |\n",
"|---|---|---|---|\n",
"| `gpu_model` | 66.7% | Console/mobile players have no GPU | Keep column — missingness is meaningful |\n",
"| `build_version` | 16.5% | Not logged in older sessions | Keep as NaN — valid historical absence |\n",
"| `device_temp_c` | 4.9% | Sensor not available on some devices | Keep as NaN |\n",
"| `session_length_s` | 1.0% | Session ended abnormally | Drop missing rows now; fix negatives/outliers after datetime correction (section 4.6) |\n",
"| `ping_ms`, `purchase_amount`, `end_time` | < 2% | Sporadic gaps | Keep as NaN |\n",
"\n",
"<br>\n",
"\n",
"> **⚠️ Context always matters.** There is no universal rule for missing values. The decisions above are reasonable for this dataset and analytical goal, but a different context might lead to different choices.\n"
]
},
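{
"cell_type": "markdown",
"metadata": {},
"source": [
"> **Optional check:** the table claims `gpu_model` is missing because console/mobile players have no GPU. If you want to verify that the missingness really tracks `platform`, a quick cross-tabulation makes the pattern visible (this assumes `platform` was already normalised in section 4.2):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Share of rows with gpu_model present vs missing, per platform\n",
"print(pd.crosstab(df_clean['platform'], df_clean['gpu_model'].isna(),\n",
"                  normalize='index').round(2))"
]
},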
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Inspect — missing value counts across all columns\n",
"missing = df_clean.isnull().sum()\n",
"missing_pct = (missing / len(df_clean) * 100).round(1)\n",
"pd.DataFrame({'missing': missing, '%': missing_pct})[missing > 0]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# session_length_s: drop rows where it is missing\n",
"# Rationale: session duration is a core metric — a session with no recorded\n",
"# duration is structurally incomplete and cannot be used for most analyses.\n",
"# These 98 rows represent <1% of the dataset, so dropping is safe.\n",
"\n",
"rows_before = len(df_clean)\n",
"df_clean = df_clean.dropna(subset=['session_length_s'])\n",
"\n",
"print(f'Rows dropped: {rows_before - len(df_clean)}')\n",
"print(f'Rows remaining: {len(df_clean)}')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"### 4.5. Outliers: `avg_fps`\n",
"\n",
"The `avg_fps` column has a maximum of 10,000 fps — physically impossible for a game running in real time. The 75th percentile is ~82 fps, confirming that 10,000 is a logging error, not an extreme but plausible value.\n",
"\n",
"**Decision:** set values above 300 fps to `NaN` rather than dropping the entire row. The rest of the data in those rows (crash flag, purchase amount, session type) is likely still valid — it would be wasteful to discard it."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Inspect — how many rows are affected?\n",
"threshold = 300\n",
"outlier_mask = df_clean['avg_fps'] > threshold\n",
"print(f'Rows with avg_fps > {threshold}: {outlier_mask.sum()}')\n",
"print('\\navg_fps distribution (before fix):')\n",
"print(df_clean['avg_fps'].describe().round(1))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Fix: set outlier values to NaN using .loc with a boolean mask\n",
"df_clean.loc[outlier_mask, 'avg_fps'] = float('nan')\n",
"\n",
"# Verify — max should now be well below 300\n",
"print('avg_fps distribution (after fix):')\n",
"print(df_clean['avg_fps'].describe().round(1))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"### 4.6. Datetime columns: mixed formats\n",
"\n",
"The `start_time` and `end_time` columns contain timestamps in at least four different formats:\n",
"\n",
"```\n",
"2025-07-18T18:32:00Z : ISO 8601 with UTC marker\n",
"2025-07-18 20:03:21-05:00 : ISO 8601 with UTC offset\n",
"20/10/2025 02:49 : European DD/MM/YYYY\n",
"08/01/2025 06:35 : Ambiguous: US MM/DD or European DD/MM?\n",
"```\n",
"\n",
"Mixed datetime formats are one of the most complex cleaning problems because some ambiguities cannot be resolved automatically -- `08/01/2025` could be August 1st or January 8th, and no algorithm can determine which without external context.\n",
"\n",
"> **Connection to `session_length_s`:** The negative values and extreme outliers we saw earlier in `session_length_s` are not independent errors -- they are a *consequence* of this datetime problem. When `start_time` and `end_time` were recorded in different formats and misinterpreted, the pre-computed duration came out wrong. After fixing the timestamps, we will recompute `session_length_s` from scratch and validate the result.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Inspect — what does start_time actually look like?\n",
"print('Sample values from start_time:')\n",
"print(df_clean['start_time'].dropna().sample(8, random_state=42).tolist())"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Fix: pd.to_datetime with utc=True normalises all timezone-aware formats to UTC.\n",
"# errors='coerce' converts anything it cannot parse to NaT (Not a Time) instead of crashing.\n",
"df_clean['start_time'] = pd.to_datetime(df_clean['start_time'], utc=True, errors='coerce')\n",
"df_clean['end_time'] = pd.to_datetime(df_clean['end_time'], utc=True, errors='coerce')\n",
"\n",
"# Verify — check how many rows could not be parsed\n",
"print('start_time dtype:', df_clean['start_time'].dtype)\n",
"print('Unparsed start_time (NaT):', df_clean['start_time'].isna().sum())\n",
"print('Unparsed end_time (NaT): ', df_clean['end_time'].isna().sum())"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Recompute session_length_s from the corrected timestamps\n",
"# Now that start_time and end_time are both timezone-aware UTC datetimes,\n",
"# the subtraction is unambiguous. We convert the result to seconds.\n",
"df_clean['session_length_s'] = (\n",
" df_clean['end_time'] - df_clean['start_time']\n",
").dt.total_seconds()\n",
"\n",
"print('session_length_s after recomputation:')\n",
"print(df_clean['session_length_s'].describe().round(1))\n",
"print(f'\\nNegative values: {(df_clean[\"session_length_s\"] < 0).sum()}')\n",
"print(f'> 8h (28800s): {(df_clean[\"session_length_s\"] > 28800).sum()}')\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Any remaining negative values are rows where timestamps were genuinely\n",
"# ambiguous and could not be resolved -- the computed duration is meaningless.\n",
"# Set them to NaN rather than dropping the row.\n",
"\n",
"neg_mask = df_clean['session_length_s'] < 0\n",
"df_clean.loc[neg_mask, 'session_length_s'] = float('nan')\n",
"print(f'Negative durations set to NaN: {neg_mask.sum()}')\n",
"\n",
"# Values above 8 hours (28800s) are suspicious for a game session.\n",
"# Inspect them before deciding.\n",
"\n",
"long_mask = df_clean['session_length_s'] > 28800\n",
"print(f'\\nSessions > 8h: {long_mask.sum()}')\n",
"print(df_clean.loc[long_mask, ['session_length_s', 'start_time', 'end_time']].head(5).to_string())\n",
"\n",
"# Decision: sessions > 8h are almost certainly logging errors (game left running,\n",
"# server not recording session end). Set to NaN.\n",
"# As always — this threshold is a judgement call that depends on the game and context.\n",
"df_clean.loc[long_mask, 'session_length_s'] = float('nan')\n",
"print(f'\\nSessions > 8h set to NaN: {long_mask.sum()}')\n",
"print('\\nFinal session_length_s distribution:')\n",
"print(df_clean['session_length_s'].describe().round(1))\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> **Note:** The number of NaT values above reflects rows where pandas could not parse the format unambiguously. These are not errors in the code — they are genuinely ambiguous records that require a domain decision to resolve (e.g., knowing that the data source always uses DD/MM/YYYY).\n",
"\n",
"---\n",
"\n",
"** **OPTIONAL** — explore the unparsed rows**\n",
"\n",
"If you want to go further, the cells below help you examine which formats failed and attempt a two-pass parsing strategy. This is optional and not required to complete the lab.\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# OPTIONAL — Step 1: inspect the unparsed rows\n",
"# We use the index of df_clean (not a boolean mask) to look up raw values in df,\n",
"# since the two dataframes have different lengths after the dropna() in step 4.4.\n",
"unparsed_idx = df_clean.index[df_clean['start_time'].isna()]\n",
"raw_start = df.loc[unparsed_idx, 'start_time'].dropna()\n",
"\n",
"print(f'Rows still unparsed: {len(unparsed_idx)}')\n",
"print('\\nSample raw values:')\n",
"print(raw_start.unique()[:12])\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# OPTIONAL: Step 2: define a systematic multi-format parser\n",
"#\n",
"# Rather than guessing with dayfirst=True, we try explicit format strings\n",
"# in sequence and stop as soon as one succeeds for each row.\n",
"# This is precise and transparent — no silent inference.\n",
"\n",
"def try_formats(series, formats):\n",
" \"\"\"Try explicit datetime format strings in order.\n",
" Returns a UTC-aware Series; rows that match no format remain NaT.\"\"\"\n",
" result = pd.Series(pd.NaT, index=series.index, dtype='datetime64[ns, UTC]')\n",
" remaining = series.copy()\n",
" for fmt in formats:\n",
" parsed = pd.to_datetime(remaining, format=fmt, errors='coerce', utc=True)\n",
" resolved_idx = parsed.index[parsed.notna()] # use index labels, not boolean mask\n",
" result.loc[resolved_idx] = parsed.loc[resolved_idx]\n",
" remaining = remaining.drop(index=resolved_idx) # drop resolved rows by label\n",
" return result\n",
"\n",
"# Format strings to try, in order of specificity\n",
"# DD/MM/YYYY is tried before MM/DD/YYYY because values where day > 12\n",
"# can only be DD/MM — those are unambiguous and should be resolved first.\n",
"# Values where day <= 12 will match both formats; the first one wins.\n",
"# Those cases are genuinely ambiguous — we flag them separately below.\n",
"candidate_formats = [\n",
" '%d/%m/%Y %H:%M', # European with time: 20/10/2025 14:30\n",
" '%m/%d/%Y %H:%M', # US with time: 10/20/2025 14:30\n",
" '%d/%m/%Y', # European date only: 20/10/2025\n",
" '%m/%d/%Y', # US date only: 10/20/2025\n",
"]\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# OPTIONAL: Step 3: apply the systematic parser to unparsed rows\n",
"raw_start = df.loc[unparsed_idx, 'start_time']\n",
"raw_end = df.loc[unparsed_idx, 'end_time']\n",
"\n",
"resolved_start = try_formats(raw_start, candidate_formats)\n",
"resolved_end = try_formats(raw_end, candidate_formats)\n",
"\n",
"df_clean.loc[unparsed_idx, 'start_time'] = resolved_start\n",
"df_clean.loc[unparsed_idx, 'end_time'] = resolved_end\n",
"\n",
"print(f'Resolved in second pass: {resolved_start.notna().sum()}')\n",
"print(f'Still NaT (truly ambiguous): {df_clean[\"start_time\"].isna().sum()}')\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# OPTIONAL: Step 4: inspect truly ambiguous rows\n",
"# These are rows where day <= 12, making both DD/MM and MM/DD valid.\n",
"# No algorithm can resolve them without knowing the data source convention.\n",
"# They remain NaT — do not silently guess.\n",
"still_nat_idx = df_clean.index[df_clean['start_time'].isna()]\n",
"if len(still_nat_idx) > 0:\n",
" print('Truly ambiguous timestamps (cannot resolve without domain knowledge):')\n",
" print(df.loc[still_nat_idx, ['start_time', 'end_time']].head(10).to_string())\n",
"else:\n",
" print('All timestamps resolved.')\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# OPTIONAL: Step 5: recompute session_length_s with the newly resolved timestamps\n",
"# More rows now have valid start_time and end_time, so more durations can be recovered.\n",
"df_clean['session_length_s'] = (\n",
" df_clean['end_time'] - df_clean['start_time']\n",
").dt.total_seconds()\n",
"\n",
"# Re-apply the same validation as before\n",
"neg_mask = df_clean['session_length_s'] < 0\n",
"long_mask = df_clean['session_length_s'] > 28800\n",
"df_clean.loc[neg_mask | long_mask, 'session_length_s'] = float('nan')\n",
"\n",
"print('session_length_s after second-pass recomputation:')\n",
"print(df_clean['session_length_s'].describe().round(1))\n",
"print(f'\\nNaN values: {df_clean[\"session_length_s\"].isna().sum()}')\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"</details>\n",
"\n",
"---\n",
"\n",
"## Part 5: Verify with D-Tale\n",
"\n",
"Reload the cleaned dataframe into D-Tale and visually confirm the fixes. This is a quick sanity check — you are looking for anything that looks wrong before committing to the cleaned dataset."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Shut down the previous D-Tale instance and reload with the clean data\n",
"d.kill()\n",
"d_clean = dtale.show(df_clean, host='127.0.0.1', subprocess=True, open_browser=True)\n",
"print('Open cleaned data in D-Tale:', d_clean._url)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In D-Tale, verify the following:\n",
"\n",
"| Column | What to check | Expected result |\n",
"|---|---|---|\n",
"| `crash_flag` | Describe → value counts | Only `True` and `False` |\n",
"| `region` | Describe → value counts | Exactly 5 values, all lowercase |\n",
"| `input_method` | Describe → value counts | Exactly 3 values, no `controllr` |\n",
"| `purchase_amount` | Describe → dtype and range | float64, no commas |\n",
"| `avg_fps` | Describe → max | Below 300 |\n",
"| `session_length_s` | Describe → min and max | No negatives, no values > 28800 |\n",
"| `start_time` | Describe → dtype | datetime64 |\n",
"\n",
"## Part 6: Compare initial and clean datasets with SweetViz"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c8f0e03a",
"metadata": {},
"outputs": [],
"source": [
"# Debug code: sometimes SweetViz cannot compare a column because its cleaned dtype is\n",
"# incompatible with the raw one, and it then crashes without a useful error message.\n",
"# The loop below tests the comparison column by column to pinpoint which column fails.\n",
"\n",
"# Test comparison column by column\n",
"# for col in df_clean.columns:\n",
"# try:\n",
"# sv.compare([df[[col]], 'Raw'], [df_clean[[col]].reset_index(drop=True), 'Cleaned'])\n",
"# except Exception as e:\n",
"# print(f\"FAIL: {col} — {e}\")\n",
"# else:\n",
"# print(f\"ok: {col}\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Compare both versions of the dataset using SweetViz.\n",
"# Not perfect, but it surfaces basic information (e.g., it handles the boolean vs. categorical change in crash_flag poorly).\n",
"# We exclude start_time and end_time because we converted them to datetime, and SweetViz cannot compare them against the original text dtype.\n",
"\n",
"exclude = ['start_time', 'end_time'] \n",
"\n",
"compare = sv.compare(\n",
" [df.drop(columns=exclude), 'Raw'],\n",
" [df_clean.drop(columns=exclude).reset_index(drop=True), 'Cleaned']\n",
")\n",
"compare.show_html('sweetviz_comparison_report.html', open_browser=True)\n",
"print('Comparison report saved.')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In the comparison report, check that:\n",
"- Boolean columns changed from TEXT → BOOL with only 2 distinct values\n",
"- Categorical columns show dramatically reduced DISTINCT counts\n",
"- `purchase_amount` changed from TEXT → NUMERIC\n",
"- `avg_fps` maximum is no longer 10,000\n",
"- `session_length_s` shows 0 missing\n",
"\n",
"---\n",
"\n",
"## Part 7: Save the Cleaned Dataset"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df_clean.to_csv('dataset_A_indie_game_telemetry_clean.csv', index=False)\n",
"print(f'Saved: {len(df_clean)} rows, {len(df_clean.columns)} columns')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"## Key Takeaways\n",
"\n",
"**Three tools, three roles — they complement each other:**\n",
"- **SweetViz** surfaces issues fast but cannot fix them: use it for triage and validation\n",
"- **D-Tale** lets you see the data as a human would: use it to understand problems before and after fixing them\n",
"- **pandas** is where all actual cleaning happens: explicit, reproducible, and version-controllable\n",
"\n",
"**Cleaning decisions are not mechanical:**\n",
"- Dropping `session_length_s` nulls was justified here: it would not be in every context\n",
"- Setting `avg_fps` outliers to NaN (not dropping rows) preserved valid data in other columns\n",
"- `gpu_model` missingness is structurally meaningful: imputing it would destroy information\n",
"\n",
"**Common issue categories you have now fixed with pandas:**\n",
"\n",
"| Issue | pandas approach |\n",
"|---|---|\n",
"| Boolean encoding chaos | `.map(bool_map)` |\n",
"| Case / whitespace inconsistency | `.str.strip().str.lower()` |\n",
"| Typos in categories | `.replace({'controllr': 'controller'})` |\n",
"| Wrong decimal separator | `.str.replace(',', '.')` + `.astype(float)` |\n",
"| Structural missing values | `dropna(subset=[...])` with explicit rationale |\n",
"| Outliers | Boolean mask + `.loc[mask, col] = NaN` |\n",
"| Mixed datetime formats | `pd.to_datetime(utc=True, errors='coerce')` |\n"
]
},
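  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# OPTIONAL demo (not part of the lab): the techniques from the takeaway table applied to a\n",
    "# tiny invented DataFrame. All column names and values here are made up for illustration.\n",
    "import pandas as pd\n",
    "\n",
    "toy = pd.DataFrame({\n",
    "    'flag': ['True', 'yes', '0', 'N'],\n",
    "    'cat': ['  EU ', 'eu', 'controllr', 'controller'],\n",
    "    'amount': ['3,50', '4.20', '1,00', '2.75'],\n",
    "})\n",
    "bool_map = {'True': True, 'yes': True, '0': False, 'N': False}\n",
    "toy['flag'] = toy['flag'].map(bool_map)  # boolean encoding chaos\n",
    "toy['cat'] = toy['cat'].str.strip().str.lower().replace({'controllr': 'controller'})  # case + typos\n",
    "toy['amount'] = toy['amount'].str.replace(',', '.', regex=False).astype(float)  # decimal separator\n",
    "print(toy)"
   ]
  },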
{
"cell_type": "markdown",
"id": "572f9d85",
"metadata": {},
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -0,0 +1,711 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": 1,
"id": "92169b19",
"metadata": {},
"outputs": [],
"source": [
"# 43679 -- Interactive Visualization\n",
"# 2025 - 2026\n",
"# 2nd semester\n",
"# Lab 1 - EDA (independent)\n",
"# ver 1.1\n",
"# 24022026 - Added questions at end; cleaning"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Lab 01<br>Task 3: Independent EDA and Cleaning\n",
"\n",
"The purpose of this task is for you to practice EDA on a new dataset in a more independent manner. Feel free to go back to Task 2's code and reuse it whenever it makes sense. Nevertheless, **don't limit yourself to just copy-pasting**: understand why you are applying each step. Understanding what the issues are and how to address them will be important for your final project.\n",
"\n",
"**Dataset:** `dataset_D_git_classroom_activity.csv`\n",
"\n",
"---\n",
"\n",
"### Context\n",
"\n",
"You have been handed an activity log from a Git-based classroom platform. It records **10,000 events** -- commits, pull requests, CI runs, code reviews, and test runs -- generated by students and bots across multiple repositories.\n",
"\n",
"Your goal is to apply the same EDA and cleaning pipeline from Task 2 to this new dataset. This time the guidance is lighter: each section tells you *what* to look for and *which tools and methods to use*, but the code is yours to write.\n",
"\n",
"### Pipeline reminder\n",
"\n",
"| Step | Tool | Goal |\n",
"|---|---|---|\n",
"| 1 — Load and inspect | pandas | Understand structure and inferred types |\n",
"| 2 — Automated profiling | SweetViz | Triage issues across all columns |\n",
"| 3 — Navigate and inspect | D-Tale | See problems with your own eyes |\n",
"| 4 — Clean | pandas | Fix each issue with explicit, reproducible code |\n",
"| 5 — Verify | D-Tale + SweetViz | Confirm fixes landed correctly |\n",
"\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Part 1: Load and Inspect"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"import sweetviz as sv\n",
"import dtale\n",
"import warnings\n",
"warnings.filterwarnings('ignore')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df = pd.read_csv('dataset_D_git_classroom_activity.csv')\n",
"\n",
"# Inspect shape, column types, and first rows\n",
"# Use: df.shape, df.dtypes, df.head()\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> **What to note:** Which columns were inferred as `object` but should be boolean or numeric? Any column that should be numeric but is `object` almost always signals a formatting problem in the raw values.\n",
"\n",
"---\n",
"\n",
"## Part 2: Automated Profiling with SweetViz\n",
"\n",
"Generate a SweetViz report on the raw dataset. Use it to fill in the triage checklist below before moving on."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Generate the SweetViz report\n",
"# Use: sv.analyze(df)\n",
"# Save to 'sweetviz_git_raw.html'\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Triage checklist\n",
"\n",
"| Question | Your finding |\n",
"|---|---|\n",
"| Which columns have missing values? Which has the most, and by how much? | *...* |\n",
"| Which columns are shown as TEXT but should be boolean? | *...* |\n",
"| Which columns are shown as TEXT but should be numeric? | *...* |\n",
"| How many distinct values does `event_type` have? Does that seem right? | *...* |\n",
"| What is unusual about `ci_status` distinct values compared to `event_type`? | *...* |\n",
"| Are there numeric columns with suspicious ranges? | *...* |\n",
"\n",
"*(Double-click to fill in your answers)*\n",
"\n",
"---\n",
"\n",
"## Part 3: Navigate and Inspect with D-Tale\n",
"\n",
"Launch D-Tale and use it to confirm each issue visually. Do not clean anything here."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Launch D-Tale\n",
"# Use: dtale.show(df, host='127.0.0.1', subprocess=False, open_browser=False)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Inspection checklist\n",
"\n",
"For each item, use D-Tale's **column header → Describe** to inspect value counts and distribution.\n",
"\n",
"| What to inspect | What you should find |\n",
"|---|---|\n",
"| `is_weekend` unique values | 8 representations of True/False |\n",
"| `event_type` unique values | Many case/whitespace variants of 7 event types |\n",
"| `ci_status` unique values | Case/whitespace variants — but also: are FAILED and FAILURE the same thing? |\n",
"| `os` unique values | WIN, Windows, win — which is the canonical form? |\n",
"| `coverage_percent` raw values | Some use comma as decimal separator |\n",
"| `pr_merge_time_hours` missing % | Very high — is this random or structural? |\n",
"| `tests_failed` vs `tests_run` | Sort `tests_failed` descending — are there rows where it exceeds `tests_run`? |\n",
"| `lines_added` distribution | Any extreme values? |\n",
"| `pr_merge_time_hours` min | Any negative values? |\n",
"| `commit_message_length` min | Any zero values? What would a zero-length commit message mean? |\n",
"\n",
"<br>\n",
"\n",
"> **Note on `pr_merge_time_hours`:** Think carefully about why this column has so many missing values before deciding what to do. Look at the `event_type` column for rows where it is missing -- does a pattern emerge?\n",
"\n",
"*(Record any additional observations below)*\n",
"\n",
"---\n",
"\n",
"## Part 4: Clean with Pandas\n",
"\n",
"Work through each issue below. For each one: **inspect --> fix --> verify**. \n",
"The first example in each category is more detailed; subsequent columns follow the same pattern.\n",
"\n",
"Start by creating a working copy:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df_clean = df.copy()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"### 4.1. Boolean columns\n",
"\n",
"**Columns:** `is_weekend`, `label_is_high_quality`, `exam_period` \n",
"**Issue:** 8 different representations of True/False \n",
"**Approach:** `.map()` with an explicit dictionary, same as Task 2 \n",
"\n",
"> **Hint:** Define the `bool_map` dictionary once and reuse it for all three columns. Include both string and boolean keys to make the mapping safe to re-run."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Inspect\n",
"# astype(str) avoids a TypeError if the raw column mixes strings with booleans or numbers\n",
"print(sorted(df_clean['is_weekend'].dropna().astype(str).unique().tolist()))"
]
},
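  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch only (toy values, NOT the real column contents): a bool_map with both string and\n",
    "# boolean keys, so mapping an already-cleaned column is a no-op and the cell is safe to re-run.\n",
    "# Adapt the keys to the variants you actually observed in the inspection above.\n",
    "example_bool_map = {'true': True, 'yes': True, 'y': True, '1': True, True: True,\n",
    "                    'false': False, 'no': False, 'n': False, '0': False, False: False}\n",
    "demo = pd.Series([' True', 'no', True, '0'])\n",
    "normalized = demo.map(lambda v: v.strip().lower() if isinstance(v, str) else v)\n",
    "print(normalized.map(example_bool_map))"
   ]
  },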
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Fix is_weekend, label_is_high_quality, exam_period\n",
"# Your code here\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Verify — each column should have only True and False, 0 nulls\n",
"for col in ['is_weekend', 'label_is_high_quality', 'exam_period']:\n",
" print(f\"{col}: {df_clean[col].value_counts().to_dict()} | nulls: {df_clean[col].isna().sum()}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"### 4.2. `is_bot_user`: case and whitespace\n",
"\n",
"**Issue:** 6 variants of 2 values (`Human`, `Bot`) with mixed case and whitespace \n",
"**Approach:** `.str.strip().str.lower()` — no typos, no synonym merging needed"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Inspect\n",
"print(df_clean['is_bot_user'].value_counts().to_string())"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Fix is_bot_user\n",
"# Your code here\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Verify — should show exactly 2 values: 'human' and 'bot'\n",
"print(df_clean['is_bot_user'].value_counts())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"### 4.3. Categorical columns: case and whitespace\n",
"\n",
"**Columns:** `dominant_language`, `editor`, `os`, `event_type` \n",
"**Issue:** Many case/whitespace variants — strip and lowercase resolves most \n",
"\n",
"> **Note on `os`:** After stripping and lowercasing you will still have `win` and `windows` as separate values. Decide on a canonical form and merge them with `.replace()`.\n",
"\n",
"> **Note on `event_type`:** After stripping and lowercasing, verify the number of unique values matches the number of distinct event types you expect."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Inspect dominant_language before\n",
"print(f'dominant_language unique before: {df_clean[\"dominant_language\"].nunique()}')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Fix dominant_language — strip and lowercase\n",
"# Your code here\n",
"\n",
"# Apply the same to editor and event_type\n",
"# Your code here\n",
"\n",
"# Fix os — strip, lowercase, then merge win/windows variants\n",
"# Your code here\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Verify\n",
"for col in ['dominant_language', 'editor', 'os', 'event_type']:\n",
" print(f\"{col} ({df_clean[col].nunique()} unique): {sorted(df_clean[col].dropna().unique().tolist())}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"### 4.4. `ci_status`: case, whitespace, and synonym merging\n",
"\n",
"**Issue:** Case and whitespace variants — but also `FAILED` and `FAILURE` represent the same outcome and need to be merged into one canonical value. \n",
"**Approach:** Strip and lowercase first, then use `.replace()` to merge synonyms.\n",
"\n",
"> **Decision to make:** After lowercasing, you will have `failed` and `failure` as separate values. Pick one as the canonical form and justify your choice in a markdown cell below."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Inspect\n",
"print(df_clean['ci_status'].value_counts().to_string())"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Fix ci_status — strip, lowercase, then merge synonyms\n",
"# You can use .replace({'current':'replaced'})\n",
"# Your code here\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Verify — should show exactly 4 values, with failed/failure collapsed into your chosen canonical form\n",
"print(df_clean['ci_status'].value_counts())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> **Your decision:** Which canonical form did you choose for `failed`/`failure`, and why? This is where domain context matters: what is the common term in CI tooling?\n",
"\n",
"*(Double-click to write your answer)*\n",
"\n",
"---\n",
"\n",
"### 4.5. `coverage_percent`: comma decimal separator and type conversion\n",
"\n",
"**Issue:** Loaded as `object` — some values use a comma instead of a decimal point \n",
"**Approach:** Same as `purchase_amount` in Task 2 — `.str.replace()` then `.astype(float)`"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Inspect — how many rows have a comma?\n",
"print(df_clean['coverage_percent'].dtype)\n",
"comma_rows = df_clean['coverage_percent'].astype(str).str.contains(',', na=False)\n",
"print(f'Rows with comma: {comma_rows.sum()}')\n",
"\n",
"# tip: any values outside the valid range? \n",
"# What is the valid range for this variable?"
]
},
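  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch on invented values (not the real column): replace the comma with a dot, then cast.\n",
    "# The same two-step pattern applies to coverage_percent below.\n",
    "example = pd.Series(['87,5', '92.0', '101,3'])\n",
    "print(example.str.replace(',', '.', regex=False).astype(float))"
   ]
  },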
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Fix coverage_percent\n",
"# Your code here\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Verify\n",
"\n",
"print(f'dtype: {df_clean[\"coverage_percent\"].dtype}')\n",
"print(df_clean['coverage_percent'].describe().round(2))\n",
"print(f'\\nValues < 0: {(df_clean[\"coverage_percent\"] < 0).sum()} rows')\n",
"print(f'Values > 100: {(df_clean[\"coverage_percent\"] > 100).sum()} rows')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"### 4.6. Missing values: decisions and strategy\n",
"\n",
"This dataset has four columns with missing values. Inspect each one and decide what to do.\n",
"\n",
"| Column | Missing | Your hypothesis for why | Your decision |\n",
"|---|---|---|---|\n",
"| `pr_merge_time_hours` | 71.7% | *...* | *...* |\n",
"| `commit_message_length` | 7.0% | *...* | *...* |\n",
"| `build_duration_s` | 2.1% | *...* | *...* |\n",
"| `time_to_ci_minutes` | 2.0% | *...* | *...* |\n",
"\n",
"*(Double-click to fill in the table)*\n",
"\n",
"> **Hint for `pr_merge_time_hours`:** Filter D-Tale to show only rows where `pr_merge_time_hours` is NOT null. What values appear in `event_type`? What does this tell you about why it is missing for the other rows?"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Inspect missing counts\n",
"missing = df_clean.isnull().sum()\n",
"pct = (missing / len(df_clean) * 100).round(1)\n",
"pd.DataFrame({'missing': missing, '%': pct})[missing > 0]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Investigate pr_merge_time_hours — which event types have non-null values?\n",
"print(df_clean.loc[df_clean['pr_merge_time_hours'].notna(), 'event_type'].value_counts())"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Apply your decisions from the table above\n",
"# Your code here\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"### 4.7. Outliers and impossible values\n",
"\n",
"Three issues to address:\n",
"\n",
"**A. `pr_merge_time_hours` — negative values** \n",
"A negative merge time is impossible. Inspect the affected rows and set them to `NaN`. \n",
"Use: boolean mask + `.loc[mask, col] = float('nan')`\n",
"\n",
"**B. `tests_failed > tests_run` — cross-column logical impossibility** \n",
"231 rows report more failed tests than tests run — logically impossible. This is a new type of issue: it requires checking consistency *between* two columns, not just inspecting each one in isolation. \n",
"Inspect the affected rows, then set `tests_failed` to `NaN` for those rows.\n",
"\n",
"**C. `lines_added` and `lines_deleted` — extreme outliers** \n",
"Some commits add or delete thousands of lines — potentially valid (e.g. adding a large library) or a logging error. \n",
"Inspect the affected rows before deciding. Document your threshold choice."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# A — Inspect negative pr_merge_time_hours\n",
"neg_mask = df_clean['pr_merge_time_hours'] < 0\n",
"print(f'Negative pr_merge_time_hours: {neg_mask.sum()}')\n",
"print(df_clean.loc[neg_mask, ['event_type', 'pr_merge_time_hours']].head())"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Fix A — set negative values to NaN\n",
"# Your code here\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# B — Inspect tests_failed > tests_run\n",
"impossible_mask = df_clean['tests_failed'] > df_clean['tests_run']\n",
"print(f'Rows where tests_failed > tests_run: {impossible_mask.sum()}')\n",
"print(df_clean.loc[impossible_mask, ['tests_run', 'tests_failed']].describe().round(1))"
]
},
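  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch on an invented 3-row frame (not the real data): a cross-column consistency mask.\n",
    "# The same pattern applies to tests_failed vs tests_run below.\n",
    "toy = pd.DataFrame({'tests_run': [10, 5, 8], 'tests_failed': [2.0, 9.0, 8.0]})\n",
    "bad = toy['tests_failed'] > toy['tests_run']\n",
    "toy.loc[bad, 'tests_failed'] = float('nan')  # keep the row, blank only the impossible value\n",
    "print(toy)"
   ]
  },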
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Fix B — set tests_failed to NaN for impossible rows\n",
"# Your code here\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# C — Inspect lines_added and lines_deleted outliers\n",
"print('lines_added distribution:')\n",
"print(df_clean['lines_added'].describe().round(1))\n",
"print(f'\\nRows > 1000 lines added: {(df_clean[\"lines_added\"] > 1000).sum()}')\n",
"print(df_clean.loc[df_clean['lines_added'] > 1000, \n",
" ['event_type', 'lines_added', 'lines_deleted', 'dominant_language']].head(8).to_string())"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Fix C — apply your decision on lines_added and lines_deleted outliers\n",
"# Your code here\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> **Your decisions:** What thresholds did you use? What was your reasoning for each?\n",
"\n",
"*(Double-click to write your answers)*\n",
"\n",
"---\n",
"\n",
"### 4.8. **OPTIONAL** `timestamp`: mixed datetime formats \n",
"\n",
"Like Task 2, the `timestamp` column contains mixed datetime formats. However, unlike Task 2, there is no derived column that depends on it — so the impact of unresolved timestamps is lower here.\n",
"\n",
"Apply a first-pass parse with `pd.to_datetime(utc=True, errors='coerce')`. Check how many rows remain unparsed. If you want to go further, apply the `try_formats()` strategy from Task 2's optional section."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Parse timestamp — first pass\n",
"# Your code here\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"## Part 5: Verify with D-Tale"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Reload D-Tale with the cleaned dataframe\n",
"# Use: dtale.show(df_clean, host='127.0.0.1', subprocess=False, open_browser=False)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Check each of the following in D-Tale:\n",
"\n",
"| Column | Expected result |\n",
"|---|---|\n",
"| `is_weekend`, `label_is_high_quality`, `exam_period` | Only `True` / `False` |\n",
"| `is_bot_user` | Only `human` / `bot` |\n",
"| `event_type` | Exactly 7 values, all lowercase |\n",
"| `ci_status` | Exactly 4 values, no `failure`/`FAILED` duplicates |\n",
"| `os` | Exactly 3 values, no `win`/`windows` duplicates |\n",
"| `coverage_percent` | dtype = float64 |\n",
"| `pr_merge_time_hours` | No negative values |\n",
"| `tests_failed` | No values exceeding `tests_run` |\n",
"\n",
"---\n",
"\n",
"## Part 6: Before vs After with SweetViz"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Generate comparison report\n",
"# Exclude timestamp if you converted it (same reason as Task 2)\n",
"# Save to 'sweetviz_git_comparison.html'\n",
"# Your code here\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"## Part 7: Save"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df_clean.to_csv('dataset_D_git_classroom_activity_clean.csv', index=False)\n",
"print(f'Saved: {len(df_clean)} rows, {len(df_clean.columns)} columns')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"## Final Questions\n",
"\n",
"Answer the following before finishing:\n",
"\n",
"**1.** The `pr_merge_time_hours` column is missing for 71.7% of rows. Is this a data quality problem? Why or why not?\n",
"\n",
"**2.** You found rows where `tests_failed > tests_run`. What does this kind of cross-column check tell you that a single-column inspection would have missed?\n",
"\n",
"**3.** For `ci_status`, you had to decide whether `failed` and `failure` are the same thing. What kind of knowledge -- beyond the data itself -- did you need to make that decision?\n",
"\n",
"**4.** Compare this dataset to the telemetry dataset from Task 2. Which issues were the same? Which were new? What does that tell you about the generality of the cleaning skills you are building?\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}