{
"cells": [
{
"cell_type": "code",
"execution_count": 1,
"id": "92169b19",
"metadata": {},
"outputs": [],
"source": [
"# 43679 -- Interactive Visualization\n",
"# 2025 - 2026\n",
"# 2nd semester\n",
"# Lab 1 - EDA (independent)\n",
"# ver 1.1\n",
"# 24022026 - Added questions at end; cleaning"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Lab 01
Task 3: Independent EDA and Cleaning\n",
"\n",
"The purpose of this task is for you to practice EDA for a new dataset in a more independent manner. Feel free to go back to Task 2's code and reuse it, whenever it makes sense. Nevertheless, **don't limit yourself to just copy-pasting** and undersstand why you are applying each step. Understanding what are the issues and how to address them will be important for your final project.\n",
"\n",
"**Dataset:** `dataset_D_git_classroom_activity.csv`\n",
"\n",
"---\n",
"\n",
"### Context\n",
"\n",
"You have been handed an activity log from a Git-based classroom platform. It records **10,000 events** -- commits, pull requests, CI runs, code reviews, and test runs -- generated by students and bots across multiple repositories.\n",
"\n",
"Your goal is to apply the same EDA and cleaning pipeline from Task 2 to this new dataset. This time the guidance is lighter: each section tells you *what* to look for and *which tools and methods to use*, but the code is yours to write.\n",
"\n",
"### Pipeline reminder\n",
"\n",
"| Step | Tool | Goal |\n",
"|---|---|---|\n",
"| 1 — Load and inspect | pandas | Understand structure and inferred types |\n",
"| 2 — Automated profiling | SweetViz | Triage issues across all columns |\n",
"| 3 — Navigate and inspect | D-Tale | See problems with your own eyes |\n",
"| 4 — Clean | pandas | Fix each issue with explicit, reproducible code |\n",
"| 5 — Verify | D-Tale + SweetViz | Confirm fixes landed correctly |\n",
"\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Part 1: Load and Inspect"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"import sweetviz as sv\n",
"import dtale\n",
"import warnings\n",
"warnings.filterwarnings('ignore')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df = pd.read_csv('dataset_D_git_classroom_activity.csv')\n",
"\n",
"# Inspect shape, column types, and first rows\n",
"# Use: df.shape, df.dtypes, df.head()\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> **What to note:** Which columns were inferred as `object` but should be boolean or numeric? Any column that should be numeric but is `object` almost always signals a formatting problem in the raw values.\n",
"\n",
"---\n",
"\n",
"## Part 2: Automated Profiling with SweetViz\n",
"\n",
"Generate a SweetViz report on the raw dataset. Use it to fill in the triage checklist below before moving on."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Generate the SweetViz report\n",
"# Use: sv.analyze(df)\n",
"# Save to 'sweetviz_git_raw.html'\n"
]
},
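{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# If you get stuck, here is one possible sketch (commented out), using the\n",
"# same SweetViz calls as Task 2:\n",
"# report = sv.analyze(df)\n",
"# report.show_html('sweetviz_git_raw.html', open_browser=False)\n"
]
},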
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Triage checklist\n",
"\n",
"| Question | Your finding |\n",
"|---|---|\n",
"| Which columns have missing values? Which has the most, and by how much? | *...* |\n",
"| Which columns are shown as TEXT but should be boolean? | *...* |\n",
"| Which columns are shown as TEXT but should be numeric? | *...* |\n",
"| How many distinct values does `event_type` have? Does that seem right? | *...* |\n",
"| What is unusual about `ci_status` distinct values compared to `event_type`? | *...* |\n",
"| Are there numeric columns with suspicious ranges? | *...* |\n",
"\n",
"*(Double-click to fill in your answers)*\n",
"\n",
"---\n",
"\n",
"## Part 3: Navigate and Inspect with D-Tale\n",
"\n",
"Launch D-Tale and use it to confirm each issue visually. Do not clean anything here."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Launch D-Tale\n",
"# Use: dtale.show(df, host='127.0.0.1', subprocess=False, open_browser=False)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Inspection checklist\n",
"\n",
"For each item, use D-Tale's **column header → Describe** to inspect value counts and distribution.\n",
"\n",
"| What to inspect | What you should find |\n",
"|---|---|\n",
"| `is_weekend` unique values | 8 representations of True/False |\n",
"| `event_type` unique values | Many case/whitespace variants of 7 event types |\n",
"| `ci_status` unique values | Case/whitespace variants — but also: are FAILED and FAILURE the same thing? |\n",
"| `os` unique values | WIN, Windows, win — which is the canonical form? |\n",
"| `coverage_percent` raw values | Some use comma as decimal separator |\n",
"| `pr_merge_time_hours` missing % | Very high — is this random or structural? |\n",
"| `tests_failed` vs `tests_run` | Sort `tests_failed` descending — are there rows where it exceeds `tests_run`? |\n",
"| `lines_added` distribution | Any extreme values? |\n",
"| `pr_merge_time_hours` min | Any negative values? |\n",
"| `commit_message_length` min | Any zero values? What would a zero-length commit message mean? |\n",
"\n",
"
\n",
"\n",
"> **Note on `pr_merge_time_hours`:** Think carefully about why this column has so many missing values before deciding what to do. Look at the `event_type` column for rows where it is missing -- does a pattern emerge?\n",
"\n",
"*(Record any additional observations below)*\n",
"\n",
"---\n",
"\n",
"## Part 4: Clean with Pandas\n",
"\n",
"Work through each issue below. For each one: **inspect --> fix --> verify**. \n",
"The first example in each category is more detailed; subsequent columns follow the same pattern.\n",
"\n",
"Start by creating a working copy:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df_clean = df.copy()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"### 4.1. Boolean columns\n",
"\n",
"**Columns:** `is_weekend`, `label_is_high_quality`, `exam_period` \n",
"**Issue:** 8 different representations of True/False \n",
"**Approach:** `.map()` with an explicit dictionary, same as Task 2 \n",
"\n",
"> **Hint:** Define the `bool_map` dictionary once and reuse it for all three columns. Include both string and boolean keys to make the mapping safe to re-run."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Inspect\n",
"print(sorted(df_clean['is_weekend'].dropna().unique().tolist()))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Fix is_weekend, label_is_high_quality, exam_period\n",
"# Your code here\n"
]
},
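{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# If you get stuck, here is one possible sketch (commented out). The string\n",
"# keys below are assumptions: replace them with the variants you actually\n",
"# observed in the inspect step.\n",
"# bool_map = {'True': True, 'TRUE': True, 'true': True, 'Yes': True, '1': True,\n",
"#             'False': False, 'FALSE': False, 'false': False, 'No': False, '0': False,\n",
"#             True: True, False: False}\n",
"# for col in ['is_weekend', 'label_is_high_quality', 'exam_period']:\n",
"#     df_clean[col] = df_clean[col].map(bool_map)\n"
]
},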
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Verify — each column should have only True and False, 0 nulls\n",
"for col in ['is_weekend', 'label_is_high_quality', 'exam_period']:\n",
" print(f\"{col}: {df_clean[col].value_counts().to_dict()} | nulls: {df_clean[col].isna().sum()}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"### 4.2. `is_bot_user`: case and whitespace\n",
"\n",
"**Issue:** 6 variants of 2 values (`Human`, `Bot`) with mixed case and whitespace \n",
"**Approach:** `.str.strip().str.lower()` — no typos, no synonym merging needed"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Inspect\n",
"print(df_clean['is_bot_user'].value_counts().to_string())"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Fix is_bot_user\n",
"# Your code here\n"
]
},
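{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# One possible sketch (commented out):\n",
"# df_clean['is_bot_user'] = df_clean['is_bot_user'].str.strip().str.lower()\n"
]
},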
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Verify — should show exactly 2 values: 'human' and 'bot'\n",
"print(df_clean['is_bot_user'].value_counts())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"### 4.3. Categorical columns: case and whitespace\n",
"\n",
"**Columns:** `dominant_language`, `editor`, `os`, `event_type` \n",
"**Issue:** Many case/whitespace variants — strip and lowercase resolves most \n",
"\n",
"> **Note on `os`:** After stripping and lowercasing you will still have `win` and `windows` as separate values. Decide on a canonical form and merge them with `.replace()`.\n",
"\n",
"> **Note on `event_type`:** After stripping and lowercasing, verify the number of unique values matches the number of distinct event types you expect."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Inspect dominant_language before\n",
"print(f'dominant_language unique before: {df_clean[\"dominant_language\"].nunique()}')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Fix dominant_language — strip and lowercase\n",
"# Your code here\n",
"\n",
"# Apply the same to editor and event_type\n",
"# Your code here\n",
"\n",
"# Fix os — strip, lowercase, then merge win/windows variants\n",
"# Your code here\n"
]
},
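{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# One possible sketch (commented out). The replace mapping assumes 'windows'\n",
"# as the canonical form; pick your own and justify it.\n",
"# for col in ['dominant_language', 'editor', 'event_type', 'os']:\n",
"#     df_clean[col] = df_clean[col].str.strip().str.lower()\n",
"# df_clean['os'] = df_clean['os'].replace({'win': 'windows'})\n"
]
},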
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Verify\n",
"for col in ['dominant_language', 'editor', 'os', 'event_type']:\n",
" print(f\"{col} ({df_clean[col].nunique()} unique): {sorted(df_clean[col].dropna().unique().tolist())}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"### 4.4. `ci_status`: case, whitespace, and synonym merging\n",
"\n",
"**Issue:** Case and whitespace variants — but also `FAILED` and `FAILURE` represent the same outcome and need to be merged into one canonical value. \n",
"**Approach:** Strip and lowercase first, then use `.replace()` to merge synonyms.\n",
"\n",
"> **Decision to make:** After lowercasing, you will have `failed` and `failure` as separate values. Pick one as the canonical form and justify your choice in a markdown cell below."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Inspect\n",
"print(df_clean['ci_status'].value_counts().to_string())"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Fix ci_status — strip, lowercase, then merge synonyms\n",
"# You can use .replace({'current':'replaced'})\n",
"# Your code here\n"
]
},
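{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# One possible sketch (commented out), assuming you pick 'failed' as the\n",
"# canonical form; swap the mapping around if you choose 'failure' instead:\n",
"# df_clean['ci_status'] = df_clean['ci_status'].str.strip().str.lower()\n",
"# df_clean['ci_status'] = df_clean['ci_status'].replace({'failure': 'failed'})\n"
]
},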
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Verify — should show exactly 4 values: success, failed, cancelled + your merged form\n",
"print(df_clean['ci_status'].value_counts())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> **Your decision:** Which canonical form did you choose for `failed`/`failure`, and why? This is where you need to go for the domain context. What is the common term?\n",
"\n",
"*(Double-click to write your answer)*\n",
"\n",
"---\n",
"\n",
"### 4.5. `coverage_percent`: comma decimal separator and type conversion\n",
"\n",
"**Issue:** Loaded as `object` — some values use a comma instead of a decimal point \n",
"**Approach:** Same as `purchase_amount` in Task 2 — `.str.replace()` then `.astype(float)`"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Inspect — how many rows have a comma?\n",
"print(df_clean['coverage_percent'].dtype)\n",
"comma_rows = df_clean['coverage_percent'].astype(str).str.contains(',', na=False)\n",
"print(f'Rows with comma: {comma_rows.sum()}')\n",
"\n",
"# tip: any values outside the valid range? \n",
"# What is the valid range for this variable?"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Fix coverage_percent\n",
"# Your code here\n"
]
},
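{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# One possible sketch (commented out):\n",
"# df_clean['coverage_percent'] = (df_clean['coverage_percent']\n",
"#                                 .str.replace(',', '.', regex=False)\n",
"#                                 .astype(float))\n"
]
},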
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Verify\n",
"\n",
"print(f'dtype: {df_clean[\"coverage_percent\"].dtype}')\n",
"print(df_clean['coverage_percent'].describe().round(2))\n",
"print(f'\\nValues < 0: {(df_clean[\"coverage_percent\"] < 0).sum()} rows')\n",
"print(f'Values > 100: {(df_clean[\"coverage_percent\"] > 100).sum()} rows')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"### 4.6. Missing values: decisions and strategy\n",
"\n",
"This dataset has four columns with missing values. Inspect each one and decide what to do.\n",
"\n",
"| Column | Missing | Your hypothesis for why | Your decision |\n",
"|---|---|---|---|\n",
"| `pr_merge_time_hours` | 71.7% | *...* | *...* |\n",
"| `commit_message_length` | 7.0% | *...* | *...* |\n",
"| `build_duration_s` | 2.1% | *...* | *...* |\n",
"| `time_to_ci_minutes` | 2.0% | *...* | *...* |\n",
"\n",
"*(Double-click to fill in the table)*\n",
"\n",
"> **Hint for `pr_merge_time_hours`:** Filter D-Tale to show only rows where `pr_merge_time_hours` is NOT null. What values appear in `event_type`? What does this tell you about why it is missing for the other rows?"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Inspect missing counts\n",
"missing = df_clean.isnull().sum()\n",
"pct = (missing / len(df_clean) * 100).round(1)\n",
"pd.DataFrame({'missing': missing, '%': pct})[missing > 0]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Investigate pr_merge_time_hours — which event types have non-null values?\n",
"print(df_clean.loc[df_clean['pr_merge_time_hours'].notna(), 'event_type'].value_counts())"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Apply your decisions from the table above\n",
"# Your code here\n"
]
},
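{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# One possible sketch (commented out). These decisions are examples, not the\n",
"# required answer: justify your own choices in the table above.\n",
"# # Structural: pr_merge_time_hours only applies to some event types, so leave it as NaN.\n",
"# # Numeric columns with small, likely-random gaps: impute with the median.\n",
"# for col in ['commit_message_length', 'build_duration_s', 'time_to_ci_minutes']:\n",
"#     df_clean[col] = df_clean[col].fillna(df_clean[col].median())\n"
]
},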
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"### 4.7. Outliers and impossible values\n",
"\n",
"Three issues to address:\n",
"\n",
"**A. `pr_merge_time_hours` — negative values** \n",
"A negative merge time is impossible. Inspect the affected rows and set them to `NaN`. \n",
"Use: boolean mask + `.loc[mask, col] = float('nan')`\n",
"\n",
"**B. `tests_failed > tests_run` — cross-column logical impossibility** \n",
"231 rows have more failed tests than tests were run — physically impossible. This is a new type of issue: it requires checking consistency *between* two columns, not just inspecting one in isolation. \n",
"Inspect the affected rows, then set `tests_failed` to `NaN` for those rows.\n",
"\n",
"**C. `lines_added` and `lines_deleted` — extreme outliers** \n",
"Some commits add or delete thousands of lines — potentially valid (e.g. adding a large library) or a logging error. \n",
"Inspect the affected rows before deciding. Document your threshold choice."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# A — Inspect negative pr_merge_time_hours\n",
"neg_mask = df_clean['pr_merge_time_hours'] < 0\n",
"print(f'Negative pr_merge_time_hours: {neg_mask.sum()}')\n",
"print(df_clean.loc[neg_mask, ['event_type', 'pr_merge_time_hours']].head())"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Fix A — set negative values to NaN\n",
"# Your code here\n"
]
},
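{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# One possible sketch (commented out):\n",
"# neg_mask = df_clean['pr_merge_time_hours'] < 0\n",
"# df_clean.loc[neg_mask, 'pr_merge_time_hours'] = float('nan')\n"
]
},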
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# B — Inspect tests_failed > tests_run\n",
"impossible_mask = df_clean['tests_failed'] > df_clean['tests_run']\n",
"print(f'Rows where tests_failed > tests_run: {impossible_mask.sum()}')\n",
"print(df_clean.loc[impossible_mask, ['tests_run', 'tests_failed']].describe().round(1))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Fix B — set tests_failed to NaN for impossible rows\n",
"# Your code here\n"
]
},
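{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# One possible sketch (commented out):\n",
"# impossible_mask = df_clean['tests_failed'] > df_clean['tests_run']\n",
"# df_clean.loc[impossible_mask, 'tests_failed'] = float('nan')\n"
]
},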
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# C — Inspect lines_added and lines_deleted outliers\n",
"print('lines_added distribution:')\n",
"print(df_clean['lines_added'].describe().round(1))\n",
"print(f'\\nRows > 1000 lines added: {(df_clean[\"lines_added\"] > 1000).sum()}')\n",
"print(df_clean.loc[df_clean['lines_added'] > 1000, \n",
" ['event_type', 'lines_added', 'lines_deleted', 'dominant_language']].head(8).to_string())"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Fix C — apply your decision on lines_added and lines_deleted outliers\n",
"# Your code here\n"
]
},
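{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# One possible sketch (commented out). The 5000-line threshold is an\n",
"# assumption: inspect the rows first and document the threshold you choose.\n",
"# for col in ['lines_added', 'lines_deleted']:\n",
"#     extreme = df_clean[col] > 5000\n",
"#     df_clean.loc[extreme, col] = float('nan')\n"
]
},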
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> **Your decisions:** What thresholds did you use? What was your reasoning for each?\n",
"\n",
"*(Double-click to write your answers)*\n",
"\n",
"---\n",
"\n",
"### 4.8. **OPTIONAL** `timestamp`: mixed datetime formats \n",
"\n",
"Like Task 2, the `timestamp` column contains mixed datetime formats. However, unlike Task 2, there is no derived column that depends on it — so the impact of unresolved timestamps is lower here.\n",
"\n",
"Apply a first-pass parse with `pd.to_datetime(utc=True, errors='coerce')`. Check how many rows remain unparsed. If you want to go further, apply the `try_formats()` strategy from Task 2's optional section."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Parse timestamp — first pass\n",
"# Your code here\n"
]
},
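{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# One possible sketch (commented out):\n",
"# parsed = pd.to_datetime(df_clean['timestamp'], utc=True, errors='coerce')\n",
"# print(f'Unparsed rows: {parsed.isna().sum()}')\n",
"# df_clean['timestamp'] = parsed\n"
]
},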
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"## Part 5: Verify with D-Tale"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Reload D-Tale with the cleaned dataframe\n",
"# Use: dtale.show(df_clean, host='127.0.0.1', subprocess=False, open_browser=False)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Check each of the following in D-Tale:\n",
"\n",
"| Column | Expected result |\n",
"|---|---|\n",
"| `is_weekend`, `label_is_high_quality`, `exam_period` | Only `True` / `False` |\n",
"| `is_bot_user` | Only `human` / `bot` |\n",
"| `event_type` | Exactly 7 values, all lowercase |\n",
"| `ci_status` | Exactly 4 values, no `failure`/`FAILED` duplicates |\n",
"| `os` | Exactly 3 values, no `win`/`windows` duplicates |\n",
"| `coverage_percent` | dtype = float64 |\n",
"| `pr_merge_time_hours` | No negative values |\n",
"| `tests_failed` | No values exceeding `tests_run` |\n",
"\n",
"---\n",
"\n",
"## Part 6: Before vs After with SweetViz"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Generate comparison report\n",
"# Exclude timestamp if you converted it (same reason as Task 2)\n",
"# Save to 'sweetviz_git_comparison.html'\n",
"# Your code here\n"
]
},
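{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# One possible sketch (commented out), mirroring Task 2. It excludes\n",
"# timestamp from both dataframes so the dtypes stay comparable:\n",
"# cols = [c for c in df.columns if c != 'timestamp']\n",
"# report = sv.compare([df[cols], 'Raw'], [df_clean[cols], 'Cleaned'])\n",
"# report.show_html('sweetviz_git_comparison.html', open_browser=False)\n"
]
},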
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"## Part 7: Save"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df_clean.to_csv('dataset_D_git_classroom_activity_clean.csv', index=False)\n",
"print(f'Saved: {len(df_clean)} rows, {len(df_clean.columns)} columns')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"## Final Questions\n",
"\n",
"Answer the following before finishing:\n",
"\n",
"**1.** The `pr_merge_time_hours` column is missing for 71.7% of rows. Is this a data quality problem? Why or why not?\n",
"\n",
"**2.** You found rows where `tests_failed > tests_run`. What does this kind of cross-column check tell you that a single-column inspection would have missed?\n",
"\n",
"**3.** For `ci_status`, you had to decide whether `failed` and `failure` are the same thing. What kind of knowledge -- beyond the data itself -- did you need to make that decision?\n",
"\n",
"**4.** Compare this dataset to the telemetry dataset from Task 2. Which issues were the same? Which were new? What does that tell you about the generality of the cleaning skills you are building?\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}