
Commit 25d8eda

docs(data_engineer): remove HTML anchor IDs
1 parent 4215370, commit 25d8eda

1 file changed: +15 / -15 lines


data_engineer/pyspark_sql_complete_guide.ipynb

Lines changed: 15 additions & 15 deletions
@@ -4,7 +4,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## Getting Started {#getting-started}\n",
+"## Getting Started\n",
 "\n",
 "First, install PySpark:\n",
 "\n",
@@ -68,7 +68,7 @@
 "source": [
 "The SparkSession is your entry point to all PySpark functionality.\n",
 "\n",
-"## Creating DataFrames {#creating-dataframes}\n",
+"## Creating DataFrames\n",
 "\n",
 "PySpark supports creating DataFrames from multiple sources including Python objects, pandas DataFrames, files, and databases.\n",
 "\n",
@@ -168,7 +168,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## Understanding Lazy Evaluation {#understanding-lazy-evaluation}\n",
+"## Understanding Lazy Evaluation\n",
 "\n",
 "PySpark's execution model differs fundamentally from pandas. Operations are divided into two types.\n",
 "\n",
@@ -215,7 +215,7 @@
 "\n",
 "This lazy evaluation enables Spark's [Catalyst optimizer](https://www.databricks.com/glossary/catalyst-optimizer) to analyze your complete workflow and apply optimizations like predicate pushdown and column pruning before execution.\n",
 "\n",
-"## Data Exploration {#data-exploration}\n",
+"## Data Exploration\n",
 "\n",
 "Data exploration in PySpark works similarly to pandas, but with methods designed for distributed computing. Instead of pandas' `df.info()` and `df.head()`, PySpark uses `printSchema()` and `show()` to inspect schemas and preview records across the cluster.\n",
 "\n",
@@ -335,7 +335,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## Selection & Filtering {#selection-filtering}\n",
+"## Selection & Filtering\n",
 "\n",
 "When selecting and filtering data, PySpark uses explicit methods like `select()` and `filter()` that build distributed execution plans.\n",
 "\n",
@@ -411,7 +411,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## Column Operations {#column-operations}\n",
+"## Column Operations\n",
 "\n",
 "Unlike pandas' mutable operations where `df['new_col']` modifies the DataFrame in place, PySpark's `withColumn()` and `withColumnRenamed()` return new DataFrames, maintaining the distributed computing model.\n",
 "\n",
@@ -487,7 +487,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## Aggregation Functions {#aggregation-functions}\n",
+"## Aggregation Functions\n",
 "\n",
 "Unlike pandas' in-memory aggregations, PySpark's `groupBy()` and aggregation functions distribute calculations across cluster nodes, using the same conceptual model as pandas but with lazy evaluation.\n",
 "\n",
@@ -561,7 +561,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## String Functions {#string-functions}\n",
+"## String Functions\n",
 "\n",
 "Unlike pandas' vectorized string methods accessed via `.str`, PySpark provides importable functions like `concat()`, `split()`, and `regexp_replace()` that transform entire columns across distributed partitions.\n",
 "\n",
@@ -642,7 +642,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## Date/Time Functions {#datetime-functions}\n",
+"## Date/Time Functions\n",
 "\n",
 "Working with dates and timestamps is essential for time-based analysis. PySpark offers comprehensive functions to extract date components, format timestamps, and perform temporal operations.\n",
 "\n",
@@ -747,7 +747,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## Working with Time Series {#working-with-time-series}\n",
+"## Working with Time Series\n",
 "\n",
 "Time series analysis often requires comparing values across different time periods. PySpark's window functions with lag and lead operations enable calculations of changes and trends over time.\n",
 "\n",
@@ -823,7 +823,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## Window Analytics {#window-analytics}\n",
+"## Window Analytics\n",
 "\n",
 "Complex analytics operations like rankings, running totals, and moving averages require window functions that operate within data partitions. These functions enable sophisticated analytical queries without self-joins.\n",
 "\n",
@@ -927,7 +927,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## Join Operations {#join-operations}\n",
+"## Join Operations\n",
 "\n",
 "Combining data from multiple tables is a core operation in data analysis. PySpark supports various join types including inner, left, and broadcast joins, with automatic optimization for performance.\n",
 "\n",
@@ -1029,7 +1029,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## SQL Integration {#sql-integration}\n",
+"## SQL Integration\n",
 "\n",
 "PySpark supports standard SQL syntax for querying data. You can write SQL queries using familiar SELECT, JOIN, and WHERE clauses alongside PySpark operations.\n",
 "\n",
@@ -1130,7 +1130,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## Custom Functions {#custom-functions}\n",
+"## Custom Functions\n",
 "\n",
 "When built-in functions aren't sufficient, custom logic can be implemented using pandas UDFs. These user-defined functions provide vectorized performance through Apache Arrow and support both scalar operations and grouped transformations.\n",
 "\n",
@@ -1195,7 +1195,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## SQL Expressions {#sql-expressions}\n",
+"## SQL Expressions\n",
 "\n",
 "SQL expressions can be embedded directly within DataFrame operations for complex transformations. The `expr()` and `selectExpr()` functions allow SQL syntax to be used alongside DataFrame methods, providing flexibility in query construction.\n",
 "\n",
