Python Phase 3: Standard Library & Professional Tools

Welcome to Phase 3 of the Python Learning Roadmap! In this final phase, you'll learn essential standard library modules and professional development tools that will enable you to write production-ready Python applications.
What You'll Learn
✅ Collections module (defaultdict, Counter, deque)
✅ Itertools and functools essentials
✅ Date/time handling with datetime
✅ File operations with pathlib
✅ JSON and CSV processing
✅ Logging best practices
✅ Async programming with asyncio
✅ Package management with Poetry
✅ Testing with pytest
✅ Type checking with mypy
Prerequisites
Before starting this phase, you should have completed Phase 1 and Phase 2 (OOP & Advanced Features) of this roadmap.
1. Collections Module
The collections module provides specialized container datatypes beyond the built-in list, dict, set, and tuple.
defaultdict - Dict with Default Values
from collections import defaultdict
# Regular dict - KeyError if key doesn't exist
regular_dict: dict[str, list[str]] = {}
# regular_dict['fruits'].append('apple') # KeyError!
# defaultdict - automatically creates default value
groups: defaultdict[str, list[str]] = defaultdict(list)
groups['fruits'].append('apple')
groups['fruits'].append('banana')
groups['vegetables'].append('carrot')
print(groups) # defaultdict(<class 'list'>, {'fruits': ['apple', 'banana'], 'vegetables': ['carrot']})
# Common use case: counting/grouping
word_count: defaultdict[str, int] = defaultdict(int)
text = "the quick brown fox jumps over the lazy dog"
for word in text.split():
    word_count[word] += 1  # No need to check if key exists
print(dict(word_count))  # {'the': 2, 'quick': 1, 'brown': 1, ...}
Counter - Count Hashable Objects
from collections import Counter
# Count elements in a list
votes = ['alice', 'bob', 'alice', 'charlie', 'alice', 'bob']
vote_counts = Counter(votes)
print(vote_counts) # Counter({'alice': 3, 'bob': 2, 'charlie': 1})
# Most common elements
print(vote_counts.most_common(2)) # [('alice', 3), ('bob', 2)]
# Mathematical operations
counter1 = Counter(a=3, b=1)
counter2 = Counter(a=1, b=2)
print(counter1 + counter2) # Counter({'a': 4, 'b': 3})
print(counter1 - counter2) # Counter({'a': 2}) # Only positive counts
# Count characters
char_freq = Counter("hello world")
print(char_freq.most_common(3))  # [('l', 3), ('o', 2), ('h', 1)]
deque - Double-Ended Queue
from collections import deque
# Efficient queue operations (O(1) on both ends)
queue: deque[str] = deque(['a', 'b', 'c'])
# Add to right (append)
queue.append('d')
print(queue) # deque(['a', 'b', 'c', 'd'])
# Add to left
queue.appendleft('z')
print(queue) # deque(['z', 'a', 'b', 'c', 'd'])
# Remove from right
queue.pop() # 'd'
print(queue) # deque(['z', 'a', 'b', 'c'])
# Remove from left
queue.popleft() # 'z'
print(queue) # deque(['a', 'b', 'c'])
# Rotate (useful for round-robin)
queue.rotate(1) # Move last element to front
print(queue) # deque(['c', 'a', 'b'])
# Fixed-size queue (FIFO with max length)
recent_items: deque[int] = deque(maxlen=3)
for i in range(5):
    recent_items.append(i)
print(recent_items)  # deque([2, 3, 4], maxlen=3) # Only keeps last 3
namedtuple - Tuple with Named Fields
from collections import namedtuple
# Create a simple immutable data structure
Point = namedtuple('Point', ['x', 'y'])
p = Point(10, 20)
print(p.x, p.y) # 10 20
print(p[0], p[1]) # 10 20 # Also accessible by index
# More readable than regular tuples
User = namedtuple('User', ['id', 'username', 'email'])
user = User(1, 'alice', 'alice@example.com')
print(f"{user.username} <{user.email}>") # alice <alice@example.com>
# Convert to dict
print(user._asdict())  # {'id': 1, 'username': 'alice', 'email': 'alice@example.com'}
2. Itertools and Functools
Powerful tools for working with iterators and functions.
itertools - Iterator Building Blocks
from itertools import (
    count, cycle, repeat,
    chain, islice, combinations,
    permutations, product, groupby
)
# Infinite iterators
counter = count(start=10, step=2) # 10, 12, 14, 16, ...
print([next(counter) for _ in range(3)]) # [10, 12, 14]
cycler = cycle(['A', 'B', 'C']) # A, B, C, A, B, C, ...
print([next(cycler) for _ in range(5)]) # ['A', 'B', 'C', 'A', 'B']
# chain - combine multiple iterables
combined = chain([1, 2], [3, 4], [5, 6])
print(list(combined)) # [1, 2, 3, 4, 5, 6]
# islice - slice an iterator (memory efficient)
numbers = count() # Infinite
first_10_evens = islice((x for x in numbers if x % 2 == 0), 10)
print(list(first_10_evens)) # [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
# combinations - all possible combinations
items = ['A', 'B', 'C']
print(list(combinations(items, 2))) # [('A', 'B'), ('A', 'C'), ('B', 'C')]
# permutations - all possible orderings
print(list(permutations(['A', 'B'], 2))) # [('A', 'B'), ('B', 'A')]
# product - cartesian product
print(list(product([1, 2], ['x', 'y'])))
# [(1, 'x'), (1, 'y'), (2, 'x'), (2, 'y')]
# groupby - group consecutive elements
data = [
    {'name': 'Alice', 'dept': 'Sales'},
    {'name': 'Bob', 'dept': 'Sales'},
    {'name': 'Charlie', 'dept': 'IT'},
]
data.sort(key=lambda x: x['dept'])  # Must be sorted first!
for dept, group in groupby(data, key=lambda x: x['dept']):
    members = [person['name'] for person in group]
    print(f"{dept}: {', '.join(members)}")
# IT: Charlie
# Sales: Alice, Bob
functools - Higher-Order Functions
from functools import reduce, partial, lru_cache, wraps
# reduce - accumulate values
numbers = [1, 2, 3, 4, 5]
total = reduce(lambda acc, x: acc + x, numbers, 0)
print(total) # 15
# partial - create functions with pre-filled arguments
def power(base: int, exponent: int) -> int:
    return base ** exponent
square = partial(power, exponent=2)
cube = partial(power, exponent=3)
print(square(5)) # 25
print(cube(3)) # 27
# lru_cache - memoize function results
@lru_cache(maxsize=128)
def fibonacci(n: int) -> int:
    """Cached Fibonacci - dramatically faster for repeated calls."""
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)
print(fibonacci(100)) # Fast even for large numbers
print(fibonacci.cache_info()) # CacheInfo(hits=98, misses=101, ...)
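`functools.wraps` (imported above) matters whenever you write your own decorators: it copies the wrapped function's name, docstring, and other metadata onto the wrapper. A minimal sketch, with a hypothetical `log_calls` decorator:

```python
from functools import wraps

def log_calls(func):
    """Decorator that announces each call (illustrative example)."""
    @wraps(func)  # Copies func.__name__, __doc__, etc. onto wrapper
    def wrapper(*args, **kwargs):
        print(f"Calling {func.__name__}")
        return func(*args, **kwargs)
    return wrapper

@log_calls
def greet(name: str) -> str:
    """Return a greeting."""
    return f"Hello, {name}!"

print(greet("Alice"))   # Calling greet, then: Hello, Alice!
print(greet.__name__)   # 'greet', not 'wrapper', thanks to @wraps
```

Without `@wraps`, `greet.__name__` would report `wrapper`, which confuses debugging and tooling.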
# wraps - preserve metadata in decorators (covered in Phase 2)
3. Datetime - Working with Dates and Times
from datetime import datetime, date, time, timedelta, timezone
from zoneinfo import ZoneInfo # Python 3.9+
# Current date and time
now = datetime.now()
today = date.today()
print(now) # 2026-01-26 14:30:00.123456
print(today) # 2026-01-26
# Create specific datetime
birthday = datetime(1990, 5, 15, 14, 30)
print(birthday) # 1990-05-15 14:30:00
# Parse from string
date_str = "2026-01-26"
parsed_date = datetime.strptime(date_str, "%Y-%m-%d")
print(parsed_date) # 2026-01-26 00:00:00
# Format to string
formatted = now.strftime("%B %d, %Y at %I:%M %p")
print(formatted) # January 26, 2026 at 02:30 PM
# Timedelta - duration between dates
tomorrow = today + timedelta(days=1)
next_week = today + timedelta(weeks=1)
one_hour_ago = now - timedelta(hours=1)
print(tomorrow) # 2026-01-27
print(next_week) # 2026-02-02
# Calculate age
def calculate_age(birth_date: date) -> int:
    today = date.today()
    age = today.year - birth_date.year
    if (today.month, today.day) < (birth_date.month, birth_date.day):
        age -= 1
    return age
print(calculate_age(date(1990, 5, 15))) # e.g., 35
# Timezone-aware datetime
utc_now = datetime.now(timezone.utc)
tokyo_time = utc_now.astimezone(ZoneInfo("Asia/Tokyo"))
ny_time = utc_now.astimezone(ZoneInfo("America/New_York"))
print(f"UTC: {utc_now}")
print(f"Tokyo: {tokyo_time}")
print(f"New York: {ny_time}")
4. Pathlib - Modern File Path Operations
from pathlib import Path
# Create Path objects
home = Path.home()
cwd = Path.cwd()
project_root = Path(__file__).parent.parent
print(home) # /Users/username
print(cwd) # /Users/username/projects/myapp
# Join paths (platform-independent)
config_file = project_root / "config" / "settings.json"
print(config_file) # /Users/username/projects/myapp/config/settings.json
# Check if path exists
if config_file.exists():
    print("Config file found")
# Check path type
data_dir = Path("data")
print(data_dir.is_dir()) # True/False
print(config_file.is_file()) # True/False
# Create directories
logs_dir = Path("logs")
logs_dir.mkdir(exist_ok=True) # Create if doesn't exist
logs_dir.mkdir(parents=True, exist_ok=True) # Create parent dirs too
# Read/write files
config_path = Path("config.txt")
config_path.write_text("setting=value\n")
content = config_path.read_text()
print(content) # setting=value
# Iterate over directory
for file in Path(".").glob("*.py"):
    print(file.name)
# Recursive search
for file in Path(".").rglob("*.json"):
    print(file)
# Path parts
full_path = Path("/home/user/projects/app/main.py")
print(full_path.name) # main.py
print(full_path.stem) # main
print(full_path.suffix) # .py
print(full_path.parent) # /home/user/projects/app
print(full_path.parts) # ('/', 'home', 'user', 'projects', 'app', 'main.py')
# File info
if full_path.exists():
    stats = full_path.stat()
    print(f"Size: {stats.st_size} bytes")
    print(f"Modified: {datetime.fromtimestamp(stats.st_mtime)}")
5. JSON and CSV Processing
Working with JSON
import json
from pathlib import Path
from typing import Any
# Python dict to JSON string
data = {
    "name": "Alice",
    "age": 30,
    "skills": ["Python", "JavaScript", "SQL"],
    "active": True
}
json_string = json.dumps(data, indent=2)
print(json_string)
# {
# "name": "Alice",
# "age": 30,
# ...
# }
# JSON string to Python dict
parsed_data = json.loads(json_string)
print(parsed_data["name"]) # Alice
# Write JSON to file
config_file = Path("config.json")
with config_file.open("w") as f:
    json.dump(data, f, indent=2)
# Read JSON from file
with config_file.open("r") as f:
    loaded_data = json.load(f)
# Handle custom objects
from dataclasses import dataclass, asdict
from datetime import datetime
@dataclass
class User:
    id: int
    username: str
    created_at: datetime
user = User(1, "alice", datetime.now())
# Convert to dict first, then to JSON
user_dict = asdict(user)
user_dict["created_at"] = user_dict["created_at"].isoformat() # Convert datetime
json_string = json.dumps(user_dict, indent=2)
Working with CSV
import csv
from pathlib import Path
# Write CSV
users = [
    {"id": 1, "name": "Alice", "email": "alice@example.com"},
    {"id": 2, "name": "Bob", "email": "bob@example.com"},
]
csv_file = Path("users.csv")
with csv_file.open("w", newline="") as f:
    fieldnames = ["id", "name", "email"]
    writer = csv.DictWriter(f, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(users)
# Read CSV
with csv_file.open("r") as f:
    reader = csv.DictReader(f)
    for row in reader:
        print(f"{row['name']} <{row['email']}>")
# Alice <alice@example.com>
# Bob <bob@example.com>
# Read as list of lists
with csv_file.open("r") as f:
    reader = csv.reader(f)
    next(reader)  # Skip header
    for row in reader:
        user_id, name, email = row  # Avoid shadowing the built-in id()
        print(f"User {user_id}: {name}")
6. Logging - Better Than Print
import logging
from pathlib import Path
# Configure logging (do this once at app startup)
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
    handlers=[
        logging.FileHandler('app.log'),
        logging.StreamHandler()  # Also print to console
    ]
)
logger = logging.getLogger(__name__)
# Log at different levels
logger.debug("Detailed debug information")
logger.info("General information")
logger.warning("Warning message")
logger.error("Error occurred")
logger.critical("Critical problem")
# Log with variables
user_id = 123
logger.info("User %s logged in", user_id)  # Lazy %-formatting: only interpolated if the level is enabled
# Log exceptions
try:
    result = 10 / 0
except ZeroDivisionError:
    logger.exception("Division by zero error")  # Includes traceback
# Multiple loggers for different modules
api_logger = logging.getLogger("api")
db_logger = logging.getLogger("database")
api_logger.info("API request received")
db_logger.info("Database query executed")
# Advanced: per-module configuration
class Config:
    LOGGING = {
        'version': 1,
        'disable_existing_loggers': False,
        'formatters': {
            'detailed': {
                'format': '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
            },
        },
        'handlers': {
            'file': {
                'class': 'logging.FileHandler',
                'filename': 'app.log',
                'formatter': 'detailed',
            },
            'console': {
                'class': 'logging.StreamHandler',
                'formatter': 'detailed',
            },
        },
        'root': {
            'level': 'INFO',
            'handlers': ['file', 'console']
        },
    }
# logging.config.dictConfig(Config.LOGGING)
7. Asyncio Basics
Asyncio enables concurrent code execution without threading complexity.
Basic async/await
import asyncio
# Define async function with 'async def'
async def fetch_data(url: str) -> str:
    """Simulate fetching data from a URL."""
    print(f"Fetching {url}...")
    await asyncio.sleep(2)  # Simulate network delay
    return f"Data from {url}"
# Run async function
async def main() -> None:
    result = await fetch_data("https://api.example.com")
    print(result)
# Execute async main
asyncio.run(main())
# Fetching https://api.example.com...
# Data from https://api.example.com
Concurrent Execution
async def fetch_multiple() -> None:
    """Fetch multiple URLs concurrently."""
    urls = [
        "https://api.example.com/users",
        "https://api.example.com/posts",
        "https://api.example.com/comments",
    ]
    # Run all concurrently
    tasks = [fetch_data(url) for url in urls]
    results = await asyncio.gather(*tasks)
    for result in results:
        print(result)
asyncio.run(fetch_multiple())
# All three fetch operations run concurrently (total ~2s, not 6s)
Real-World Example: API Client
import asyncio
import aiohttp # pip install aiohttp
from typing import List, Dict
async def fetch_user(session: aiohttp.ClientSession, user_id: int) -> Dict:
    """Fetch single user from API."""
    url = f"https://jsonplaceholder.typicode.com/users/{user_id}"
    async with session.get(url) as response:
        return await response.json()
async def fetch_all_users(user_ids: List[int]) -> List[Dict]:
    """Fetch multiple users concurrently."""
    async with aiohttp.ClientSession() as session:
        tasks = [fetch_user(session, uid) for uid in user_ids]
        return await asyncio.gather(*tasks)
# Usage
async def main() -> None:
    users = await fetch_all_users([1, 2, 3, 4, 5])
    for user in users:
        print(f"{user['name']} - {user['email']}")
asyncio.run(main())
For a complete guide to async programming, see the Async Programming Deep Dive.
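One `gather` detail worth knowing: by default it re-raises the first exception, and the successful results are lost to the caller. Passing `return_exceptions=True` returns exceptions as ordinary values instead, so failures can be handled per task. A small self-contained sketch (the coroutine names are illustrative):

```python
import asyncio

async def might_fail(n: int) -> int:
    """Succeeds for most inputs, fails for n == 2 (illustrative)."""
    await asyncio.sleep(0.01)
    if n == 2:
        raise ValueError(f"task {n} failed")
    return n * 10

async def main() -> list:
    # Exceptions become return values instead of propagating
    return await asyncio.gather(
        might_fail(1), might_fail(2), might_fail(3),
        return_exceptions=True,
    )

results = asyncio.run(main())
for r in results:
    if isinstance(r, Exception):
        print(f"failed: {r}")
    else:
        print(f"ok: {r}")
```

Results arrive in the same order as the coroutines were passed, so you can match each outcome back to its task.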
8. Testing with pytest
Writing tests ensures code reliability and makes refactoring safer.
Basic pytest
# content of test_calculator.py
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b
def test_add():
    """Test addition function."""
    assert add(2, 3) == 5
    assert add(-1, 1) == 0
    assert add(0, 0) == 0
def test_add_negative():
    """Test adding negative numbers."""
    assert add(-5, -3) == -8
Run tests: pytest test_calculator.py
Fixtures for Setup/Teardown
import pytest
from pathlib import Path
from typing import Iterator
@pytest.fixture
def temp_file(tmp_path: Path) -> Iterator[Path]:
    """Create a temporary file for testing."""
    file = tmp_path / "test.txt"
    file.write_text("test content")
    yield file
    # Cleanup happens automatically
def test_file_reading(temp_file: Path):
    """Test reading from a file."""
    content = temp_file.read_text()
    assert content == "test content"
Parametrized Tests
import pytest
@pytest.mark.parametrize("input,expected", [
    (2, 4),
    (3, 9),
    (4, 16),
    (5, 25),
])
def test_square(input: int, expected: int):
    """Test squaring numbers."""
    assert input ** 2 == expected
Testing Exceptions
import pytest
def divide(a: int, b: int) -> float:
    """Divide two numbers."""
    if b == 0:
        raise ValueError("Cannot divide by zero")
    return a / b
def test_divide_by_zero():
    """Test that dividing by zero raises an error."""
    with pytest.raises(ValueError, match="Cannot divide by zero"):
        divide(10, 0)
For a complete guide to testing, see the Python Testing Deep Dive.
9. Type Checking with mypy
Static type checking catches bugs before runtime.
Install and Run mypy
pip install mypy
mypy your_script.py
Type Checking Example
# good_types.py
from typing import List, Optional
def greet_users(names: List[str]) -> None:
    """Greet a list of users."""
    for name in names:
        print(f"Hello, {name}!")
def find_user(user_id: int) -> Optional[str]:
    """Find user by ID."""
    users = {1: "Alice", 2: "Bob"}
    return users.get(user_id)
# mypy will validate types
greet_users(["Alice", "Bob"])  # OK
# greet_users([1, 2, 3])  # Error: List[int] not compatible with List[str]
user = find_user(1)
if user:  # mypy narrows Optional[str] to str inside this branch
    print(user.upper())
Configuration (mypy.ini)
[mypy]
python_version = 3.10
warn_return_any = True
warn_unused_configs = True
disallow_untyped_defs = True
For a complete guide to type hints, see the Type Hints Deep Dive.
10. Package Management with Poetry
Poetry is a modern dependency management and packaging tool.
Install Poetry
curl -sSL https://install.python-poetry.org | python3 -
Create New Project
poetry new my-project
cd my-project
Project structure:
my-project/
├── pyproject.toml   # Project config and dependencies
├── README.md
├── my_project/
│   └── __init__.py
└── tests/
    └── __init__.py
Add Dependencies
# Add runtime dependency
poetry add requests
# Add dev dependency
poetry add --group dev pytest
# Add with version constraint
poetry add "fastapi>=0.100.0"
Managing Environment
# Install all dependencies
poetry install
# Activate virtual environment
poetry shell
# Run command in venv
poetry run python script.py
poetry run pytest
Lock File
poetry.lock ensures reproducible installs across all environments.
# Update dependencies
poetry update
# Update specific package
poetry update requests
For a complete guide to Poetry, see the Package Management Deep Dive.
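For reference, the `poetry add` commands above record their results in `pyproject.toml`. A sketch of what the relevant sections might look like (version numbers illustrative; exact layout varies by Poetry version):

```toml
[tool.poetry]
name = "my-project"
version = "0.1.0"
description = ""

[tool.poetry.dependencies]
python = "^3.10"
requests = "^2.31"
fastapi = ">=0.100.0"

[tool.poetry.group.dev.dependencies]
pytest = "^8.0"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
```

The caret constraint `^2.31` allows compatible upgrades (here, any 2.x release at or above 2.31), while exact resolved versions are pinned in `poetry.lock`.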
11. Environment Variables
Manage configuration without hardcoding values.
import os
from pathlib import Path
from typing import Optional
# Read environment variable
api_key = os.getenv("API_KEY")
if not api_key:
    raise ValueError("API_KEY environment variable not set")
# With default value
debug_mode = os.getenv("DEBUG", "False") == "True"
port = int(os.getenv("PORT", "8000"))
# Using python-dotenv (pip install python-dotenv)
from dotenv import load_dotenv
# Load from .env file
load_dotenv() # Looks for .env in current directory
# Now access as normal
database_url = os.getenv("DATABASE_URL")
secret_key = os.getenv("SECRET_KEY")
# .env file example:
# DATABASE_URL=postgresql://localhost/mydb
# SECRET_KEY=your-secret-key-here
# DEBUG=True
12. Practical Example: Data Processing Pipeline
Let's combine everything into a real-world example:
import asyncio
import logging
from pathlib import Path
from datetime import datetime
from typing import List, Dict
from collections import Counter
import json
# Setup logging
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)
class DataProcessor:
    """Process and analyze data from multiple sources."""
    def __init__(self, output_dir: Path) -> None:
        self.output_dir = output_dir
        self.output_dir.mkdir(exist_ok=True)
    async def fetch_data(self, source: str) -> List[Dict]:
        """Simulate fetching data from a source."""
        logger.info(f"Fetching data from {source}")
        await asyncio.sleep(1)  # Simulate network delay
        return [
            {"id": 1, "category": "A", "value": 100},
            {"id": 2, "category": "B", "value": 200},
            {"id": 3, "category": "A", "value": 150},
        ]
    async def process_sources(self, sources: List[str]) -> List[Dict]:
        """Fetch data from multiple sources concurrently."""
        tasks = [self.fetch_data(source) for source in sources]
        results = await asyncio.gather(*tasks)
        # Flatten results
        all_data = []
        for result in results:
            all_data.extend(result)
        logger.info(f"Fetched {len(all_data)} records")
        return all_data
    def analyze_data(self, data: List[Dict]) -> Dict:
        """Analyze the collected data."""
        # Count by category
        categories = Counter(item["category"] for item in data)
        # Calculate totals
        total_value = sum(item["value"] for item in data)
        analysis = {
            "timestamp": datetime.now().isoformat(),
            "total_records": len(data),
            "categories": dict(categories),
            "total_value": total_value,
            "average_value": total_value / len(data) if data else 0
        }
        logger.info(f"Analysis complete: {len(data)} records processed")
        return analysis
    def save_results(self, analysis: Dict) -> Path:
        """Save analysis results to JSON file."""
        output_file = self.output_dir / f"analysis_{datetime.now():%Y%m%d_%H%M%S}.json"
        with output_file.open("w") as f:
            json.dump(analysis, f, indent=2)
        logger.info(f"Results saved to {output_file}")
        return output_file
async def main() -> None:
    """Main pipeline execution."""
    processor = DataProcessor(Path("output"))
    # Fetch data from multiple sources
    sources = ["source1", "source2", "source3"]
    data = await processor.process_sources(sources)
    # Analyze
    analysis = processor.analyze_data(data)
    # Save
    output_file = processor.save_results(analysis)
    print("\nPipeline complete!")
    print(f"Processed {analysis['total_records']} records")
    print(f"Results saved to: {output_file}")
if __name__ == "__main__":
    asyncio.run(main())
13. Best Practices
✅ Use pathlib for file operations: Modern and cross-platform
✅ Log instead of print: Better debugging and production monitoring
✅ Write tests for critical code: Catch bugs early
✅ Use type hints: Enable static analysis with mypy
✅ Manage dependencies with Poetry: Reproducible environments
✅ Use async for I/O-bound operations: Better performance for network/file operations
✅ Store config in environment variables: Never hardcode secrets
✅ Use collections for specialized needs: defaultdict, Counter, deque
14. Common Pitfalls
❌ Blocking async code: Don't use time.sleep() in async functions (use asyncio.sleep())
❌ Forgetting await: Async functions must be awaited
❌ Not handling timezones: Always use timezone-aware datetimes in production
❌ Ignoring exceptions in logs: Use logger.exception() to include tracebacks
❌ Hardcoding file paths: Use pathlib for cross-platform compatibility
❌ Not using virtual environments: Always isolate project dependencies
# ❌ BAD: Blocking in async code
import time
async def bad_async():
    time.sleep(1)  # Blocks entire event loop!
# ✅ GOOD: Use async sleep
import asyncio
async def good_async():
    await asyncio.sleep(1)  # Non-blocking
# ❌ BAD: Timezone-naive datetime
from datetime import datetime
now = datetime.now() # No timezone info
# ✅ GOOD: Timezone-aware
from datetime import datetime, timezone
now = datetime.now(timezone.utc)
# ❌ BAD: Hardcoded paths
file = open("/home/user/data.txt")
# ✅ GOOD: Use pathlib
from pathlib import Path
file_path = Path.home() / "data.txt"
with file_path.open() as f:
    data = f.read()
Summary and Key Takeaways
In this phase, you learned:
✅ Collections module: Specialized containers (defaultdict, Counter, deque, namedtuple)
✅ Itertools & functools: Powerful iterator and function tools
✅ Datetime & pathlib: Modern date/time and file path handling
✅ JSON & CSV: Data serialization and parsing
✅ Logging: Professional debugging and monitoring
✅ Asyncio: Concurrent programming with async/await
✅ Testing: Write reliable code with pytest
✅ Type checking: Static analysis with mypy
✅ Poetry: Modern package management
✅ Environment variables: Secure configuration management
Practice Exercises
- Build a Web Scraper: Use asyncio + aiohttp to scrape multiple pages concurrently
- Log Analyzer: Parse log files using pathlib and collections, generate statistics
- API Client: Create a typed, tested async API client with error handling
What's Next?
🎉 Congratulations! You've completed the Python Learning Roadmap!
Deep Dive Topics
Continue learning with these advanced topics:
- Type Hints & Typing - Master Python's type system
- Python Decorators - Advanced decorator patterns
- Async Programming - Master asyncio and concurrent programming
- Python Testing with pytest - Comprehensive testing strategies
- Package Management with Poetry - Advanced Poetry workflows
Ready for Web Development?
Now that you know Python, you're ready to build web APIs:
- FastAPI Learning Roadmap - Build modern, fast APIs
Previous: Phase 2: OOP & Advanced Features
Part of the Python Learning Roadmap series