5 Python Scripts That Save Me 10+ Hours Every Week
From automating boring file tasks to cleaning messy data, these scripts do the heavy lifting so I don’t have to.

Imagine saving an entire workday each week — with just a few Python scripts.
Let’s be honest — most of us didn’t get into tech because we love repetitive work.
But like many developers, I found myself spending hours each week on small, boring tasks: renaming files, organizing data, responding to the same emails, or digging through logs.
Then I remembered why I learned Python in the first place.
Over the past year, I’ve built a small toolkit of Python scripts that quietly run in the background and give me back 10+ hours every single week. These aren’t fancy machine learning models — just practical, dead-simple automations that anyone can build.
Here are 5 scripts I use weekly that you might want in your toolbox too.
1. Auto-Renamer for Files
Problem: Every week, I download dozens of files — invoices, reports, meeting recordings — with awful names like `Meeting123_final_V2_NEW.mp4`.
Solution: A Python script that scans my Downloads folder, renames files using a readable format (`meeting-recording_2025-07-19.mp4`), and moves them to the right folders.
```python
import os
import re
import shutil
from datetime import datetime

# === CONFIGURATION ===
DOWNLOADS_FOLDER = os.path.expanduser("~/Downloads")
DESTINATION_FOLDER = os.path.expanduser("~/Documents/Meetings")
KEYWORDS = ["meeting", "recording", "zoom"]  # Add more if needed
EXTENSIONS = [".mp4", ".mkv", ".mov"]  # File types to match

# === Ensure destination exists ===
os.makedirs(DESTINATION_FOLDER, exist_ok=True)

# === Generate timestamped filename ===
def generate_filename(base="meeting-recording", ext=".mp4"):
    timestamp = datetime.now().strftime("%Y-%m-%d_%H-%M")
    return f"{base}_{timestamp}{ext}"

# === Sanitize filename (optional) ===
def clean_filename(name):
    return re.sub(r"[^a-zA-Z0-9_.-]", "_", name)

# === Main Script ===
def auto_rename_and_move():
    moved_files = 0
    for file in os.listdir(DOWNLOADS_FOLDER):
        filepath = os.path.join(DOWNLOADS_FOLDER, file)
        # Check that it's a file and matches our filters
        if os.path.isfile(filepath) and any(file.lower().endswith(ext) for ext in EXTENSIONS):
            if any(keyword in file.lower() for keyword in KEYWORDS):
                ext = os.path.splitext(file)[1]
                new_name = generate_filename(ext=ext)
                new_name = clean_filename(new_name)
                dest_path = os.path.join(DESTINATION_FOLDER, new_name)
                # Move the file
                shutil.move(filepath, dest_path)
                print(f"✅ Moved: {file} → {new_name}")
                moved_files += 1
    if moved_files == 0:
        print("ℹ️ No matching files found.")
    else:
        print(f"🎉 Done! {moved_files} file(s) moved and renamed.")

if __name__ == "__main__":
    auto_rename_and_move()
```
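One caveat worth knowing: two recordings moved in the same minute collide on the same timestamped name, and `shutil.move` may silently overwrite the first. A small collision guard can be called on `dest_path` before the move (this `unique_path` helper is a hypothetical addition, not part of the script above):

```python
import os

def unique_path(dest_path):
    """Return dest_path unchanged if it's free; otherwise insert
    _1, _2, ... before the extension until the name is unused."""
    base, ext = os.path.splitext(dest_path)
    candidate, counter = dest_path, 1
    while os.path.exists(candidate):
        candidate = f"{base}_{counter}{ext}"
        counter += 1
    return candidate
```

Then `shutil.move(filepath, unique_path(dest_path))` keeps every file, even on busy days.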
Time saved: ~30 mins/week
Bonus: It’s scheduled with a simple cron job (something like `0 9 * * * python3 ~/scripts/auto_renamer.py` — adjust the path to wherever you keep the script) or Windows Task Scheduler.
2. Inbox Digest for Unread Emails
Problem: I check email way too often — mostly out of FOMO.
Solution: A script that sends me a summary of unread emails (from specific labels or senders) to Telegram or Slack every 2 hours.
Step 1: Enable IMAP and Get App Password
- Go to Gmail > Settings > See all settings > Forwarding and POP/IMAP > Enable IMAP
- If you have 2FA enabled (recommended), generate an App Password
Script: inbox_digest.py
```python
import email
import imaplib
from email.header import decode_header

import requests  # pip install requests

# === USER CONFIG ===
EMAIL = "your_email@gmail.com"
APP_PASSWORD = "your_app_password"  # Use an app-specific password!
IMAP_SERVER = "imap.gmail.com"
DIGEST_LIMIT = 10  # Max number of unread emails to include

# === Optional: Telegram integration ===
USE_TELEGRAM = False
TELEGRAM_TOKEN = "your_bot_token"
TELEGRAM_CHAT_ID = "your_chat_id"

def send_to_telegram(message):
    if not USE_TELEGRAM:
        return
    url = f"https://api.telegram.org/bot{TELEGRAM_TOKEN}/sendMessage"
    payload = {"chat_id": TELEGRAM_CHAT_ID, "text": message}
    try:
        requests.post(url, data=payload)
    except Exception as e:
        print("❌ Telegram send failed:", e)

def clean_subject(raw_subject):
    if isinstance(raw_subject, bytes):
        try:
            return raw_subject.decode()
        except UnicodeDecodeError:
            return raw_subject.decode("utf-8", errors="ignore")
    return raw_subject

def fetch_unread_emails():
    try:
        # Connect and log in
        mail = imaplib.IMAP4_SSL(IMAP_SERVER)
        mail.login(EMAIL, APP_PASSWORD)
        mail.select("inbox")
        # Search for unread messages
        status, messages = mail.search(None, "UNSEEN")
        email_ids = messages[0].split()
        if not email_ids:
            print("📭 No unread emails.")
            send_to_telegram("✅ Inbox is clean! No unread emails.")
            return
        digest = f"📬 Unread Email Digest ({len(email_ids)} total):\n\n"
        for eid in email_ids[:DIGEST_LIMIT]:
            res, msg = mail.fetch(eid, "(RFC822)")
            for response in msg:
                if isinstance(response, tuple):
                    msg_obj = email.message_from_bytes(response[1])
                    subject, _ = decode_header(msg_obj.get("Subject", ""))[0]
                    subject = clean_subject(subject)
                    from_ = msg_obj.get("From", "Unknown Sender")
                    date_ = msg_obj.get("Date", "Unknown Date")
                    digest += f"• *{subject}*\n  From: {from_}\n  Date: {date_}\n\n"
        mail.logout()
        print(digest)
        send_to_telegram(digest)
    except Exception as e:
        print("❌ Error:", e)
        send_to_telegram(f"❌ Failed to fetch email digest: {e}")

if __name__ == "__main__":
    fetch_unread_emails()
```
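One subtlety: for non-ASCII subjects, `decode_header` can return the subject split into several `(bytes, charset)` chunks, so joining all of them is a bit more robust than taking only the first. A minimal sketch:

```python
from email.header import decode_header

# An RFC 2047-encoded subject line, the way Gmail delivers non-ASCII subjects
raw = "=?utf-8?b?UsOpdW5pb24gbm90ZXM=?="
subject = "".join(
    part.decode(charset or "utf-8") if isinstance(part, bytes) else part
    for part, charset in decode_header(raw)
)
print(subject)  # Réunion notes
```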
Time saved: 1+ hour/week
Why it works: I no longer context-switch every time I see a notification.
3. CSV Cleaner + Formatter
Problem: Clients send me ugly, inconsistent spreadsheets. Some use `;` delimiters. Others have mixed date formats. Every project starts with a cleanup.
Solution: A Python script that reads any `.csv`, standardizes column names, fixes dates, and saves a cleaned version.
```python
import os
import re
from tkinter import Tk, filedialog

import pandas as pd

# === CONFIGURATION ===
OUTPUT_PREFIX = "cleaned_"
DEFAULT_DELIMITERS = [",", ";", "\t", "|"]  # Try in this order

def detect_delimiter(file_path):
    """Guess the delimiter by counting candidates in a small sample."""
    with open(file_path, "r", encoding="utf-8", errors="ignore") as f:
        sample = f.read(1024)
    delimiter_scores = {delim: sample.count(delim) for delim in DEFAULT_DELIMITERS}
    return max(delimiter_scores, key=delimiter_scores.get)

def standardize_column_names(columns):
    """Lowercase, replace spaces with underscores, strip special characters."""
    clean = []
    for col in columns:
        col = col.strip().lower().replace(" ", "_")
        col = re.sub(r"[^\w_]", "", col)  # remove special characters
        clean.append(col)
    return clean

def parse_dates(df):
    """Parse any column whose name mentions a date or time; bad values become NaT."""
    for col in df.columns:
        if "date" in col or "time" in col:
            df[col] = pd.to_datetime(df[col], errors="coerce")
    return df

def clean_csv(file_path):
    print(f"📂 Processing: {file_path}")
    delimiter = detect_delimiter(file_path)
    print(f"🔍 Detected delimiter: '{delimiter}'")
    try:
        df = pd.read_csv(file_path, delimiter=delimiter, engine="python")
        # Clean headers
        df.columns = standardize_column_names(df.columns)
        # Handle dates
        df = parse_dates(df)
        # Save the cleaned file next to the original
        base = os.path.basename(file_path)
        output_path = os.path.join(os.path.dirname(file_path), OUTPUT_PREFIX + base)
        df.to_csv(output_path, index=False)
        print(f"✅ Cleaned CSV saved as: {output_path}")
        return output_path
    except Exception as e:
        print(f"❌ Failed to process {file_path}: {e}")

# === Optional GUI for file selection ===
def select_and_clean_file():
    root = Tk()
    root.withdraw()  # Hide the main window
    file_path = filedialog.askopenfilename(filetypes=[("CSV files", "*.csv")])
    if file_path:
        clean_csv(file_path)

if __name__ == "__main__":
    # Uncomment this line to enable the GUI file picker instead:
    # select_and_clean_file()
    # Or clean a specific file directly
    input_file = "raw_data.csv"  # Replace with your CSV file path
    clean_csv(input_file)
```
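As an aside, the delimiter guessing that `detect_delimiter` does by counting characters is also built into the standard library: `csv.Sniffer` inspects a sample and accounts for quoting too. A quick sketch (the sample string here is made up):

```python
import csv

sample = "name;signup_date;plan\nAda;2025-07-19;pro\nAlan;2025-07-20;free\n"
dialect = csv.Sniffer().sniff(sample, delimiters=",;\t|")
print(dialect.delimiter)  # ;
```

The resulting dialect can be passed straight to `pd.read_csv(..., delimiter=dialect.delimiter)`.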
Time saved: ~2–3 hours/week
Pro tip: Add a drag-and-drop GUI using tkinter if you’re sharing it with non-technical colleagues.
4. Auto-Responder for FAQs
Problem: People keep asking the same 10 questions on LinkedIn, via email, and even on Slack.
Solution: A Python script that listens for new messages (via Gmail API or Slack API), checks for keyword matches, and sends polite, templated responses.
Before You Start
- Enable Gmail API: https://developers.google.com/gmail/api/quickstart/python
- Install required packages:

```
pip install --upgrade google-api-python-client google-auth-httplib2 google-auth-oauthlib
```
File Structure (minimal):

```
auto_responder/
├── credentials.json   # Your OAuth 2.0 credentials from Google Cloud Console
├── token.json         # Generated after first auth
└── auto_responder.py  # Main script
```
Script: auto_responder.py
```python
import base64
import os.path
from email.mime.text import MIMEText

from google.auth.transport.requests import Request
from google.oauth2.credentials import Credentials
from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient.discovery import build

# === FAQ Keyword-Reply Map ===
FAQ_RESPONSES = {
    "how to start coding": "Hey! Here's a beginner guide I recommend: https://developer.mozilla.org/en-US/docs/Learn",
    "are you hiring": "Thanks for asking! I'm not hiring right now, but here's a great job board: https://remoteok.com/",
    "can you mentor me": "I wish I could mentor everyone! Here's a mentoring platform I trust: https://codementor.io/",
    "what tech stack do you use": "I mainly work with Python, FastAPI, and PostgreSQL. Let me know if you want to dive deeper!",
}

# === SCOPES ===
SCOPES = ["https://www.googleapis.com/auth/gmail.modify"]

def authenticate_gmail():
    creds = None
    if os.path.exists("token.json"):
        creds = Credentials.from_authorized_user_file("token.json", SCOPES)
    if not creds or not creds.valid:
        if creds and creds.expired and creds.refresh_token:
            creds.refresh(Request())
        else:
            flow = InstalledAppFlow.from_client_secrets_file("credentials.json", SCOPES)
            creds = flow.run_local_server(port=0)
        with open("token.json", "w") as token:
            token.write(creds.to_json())
    return build("gmail", "v1", credentials=creds)

def search_keywords(text):
    for keyword in FAQ_RESPONSES:
        if keyword in text.lower():
            return FAQ_RESPONSES[keyword]
    return None

def create_reply(to, thread_id, message_text):
    message = MIMEText(message_text)
    message["to"] = to
    message["subject"] = "Re: Your question"
    raw = base64.urlsafe_b64encode(message.as_bytes()).decode()
    return {"raw": raw, "threadId": thread_id}

def auto_reply():
    service = authenticate_gmail()
    # Get unread messages
    results = service.users().messages().list(userId="me", labelIds=["INBOX", "UNREAD"]).execute()
    messages = results.get("messages", [])
    if not messages:
        print("📭 No unread messages.")
        return
    for msg in messages:
        msg_data = service.users().messages().get(userId="me", id=msg["id"]).execute()
        headers = msg_data["payload"]["headers"]
        subject = next((h["value"] for h in headers if h["name"] == "Subject"), "")
        sender = next((h["value"] for h in headers if h["name"] == "From"), "")
        thread_id = msg_data["threadId"]
        # The snippet is a short plain-text preview of the message body
        snippet = msg_data.get("snippet", "")
        response = search_keywords(subject + " " + snippet)
        if response:
            print(f"✉️ Found FAQ match from {sender}. Responding...")
            reply = create_reply(sender, thread_id, response)
            service.users().messages().send(userId="me", body=reply).execute()
            # Mark the message as read
            service.users().messages().modify(
                userId="me",
                id=msg["id"],
                body={"removeLabelIds": ["UNREAD"]},
            ).execute()
        else:
            print(f"ℹ️ No FAQ match for message from {sender}. Skipping.")

if __name__ == "__main__":
    auto_reply()
```
Example output:

```
✉️ Found FAQ match from John Doe <john@example.com>. Responding...
ℹ️ No FAQ match for message from Recruiter <hr@somecompany.com>. Skipping.
```
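One limitation: search_keywords only matches exact substrings, so "How do I start coding?" never triggers the "how to start coding" reply. A fuzzier matcher built on the standard library's difflib can catch paraphrases. A sketch (fuzzy_faq_match is a hypothetical helper, not part of the script above):

```python
import difflib

FAQ_RESPONSES = {
    "how to start coding": "Here's a beginner guide: https://developer.mozilla.org/en-US/docs/Learn",
    "are you hiring": "Not hiring right now, but try https://remoteok.com/",
}

def fuzzy_faq_match(text, cutoff=0.6):
    """Return the canned reply whose keyword best resembles `text`, or None."""
    matches = difflib.get_close_matches(text.lower(), FAQ_RESPONSES, n=1, cutoff=cutoff)
    return FAQ_RESPONSES[matches[0]] if matches else None

print(fuzzy_faq_match("How do I start coding?"))  # the beginner-guide reply
```

Tune `cutoff` to taste: lower values catch more paraphrases but risk false positives, which is another reason to review drafts before they go out.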
Time saved: 1–2 hours/week
Note: I manually review flagged replies before sending — but the matching and draft-writing is automatic.
5. Daily Journal Organizer
Problem: I write short reflections and notes daily, but I never look at them again because they’re scattered across files.
Solution: A script that organizes entries by date, tags them, and creates a searchable index.
```python
from datetime import datetime
from pathlib import Path

# === CONFIGURATION ===
JOURNAL_DIR = Path.home() / "journal_entries"  # You can customize this path
DATE_FORMAT = "%Y-%m-%d"
TIME_FORMAT = "%H:%M"

# === Ensure the journal directory exists ===
def ensure_journal_dir():
    JOURNAL_DIR.mkdir(parents=True, exist_ok=True)

# === Generate today's journal file path ===
def get_today_file():
    today_str = datetime.now().strftime(DATE_FORMAT)
    return JOURNAL_DIR / f"{today_str}.txt"

# === Write a new journal entry with timestamp ===
def write_entry(entry_text: str):
    ensure_journal_dir()
    file_path = get_today_file()
    timestamp = datetime.now().strftime(TIME_FORMAT)
    with open(file_path, "a", encoding="utf-8") as f:
        f.write(f"[{timestamp}] {entry_text.strip()}\n")
    print(f"✅ Entry saved to {file_path}")

# === Optional: View past entries ===
def view_today_entries():
    file_path = get_today_file()
    if file_path.exists():
        with open(file_path, "r", encoding="utf-8") as f:
            print("\n📓 Today's Entries:\n" + "-" * 40)
            print(f.read())
    else:
        print("📭 No entries yet for today.")

# === Optional: Search all journal files ===
def search_entries(keyword):
    print(f"\n🔍 Searching for: '{keyword}'\n" + "-" * 40)
    for file in sorted(JOURNAL_DIR.glob("*.txt")):
        with open(file, "r", encoding="utf-8") as f:
            contents = f.read()
        if keyword.lower() in contents.lower():
            print(f"📁 {file.name}")
            for line in contents.splitlines():
                if keyword.lower() in line.lower():
                    print(f"   {line.strip()}")
            print("-" * 40)

# === Main Menu ===
def main():
    ensure_journal_dir()
    while True:
        print("\n📘 Daily Journal Organizer")
        print("-" * 30)
        print("1. Write new entry")
        print("2. View today's entries")
        print("3. Search journal")
        print("4. Exit")
        choice = input("Select an option [1–4]: ").strip()
        if choice == "1":
            entry = input("📝 What's on your mind?\n> ")
            write_entry(entry)
        elif choice == "2":
            view_today_entries()
        elif choice == "3":
            keyword = input("🔍 Enter keyword to search: ")
            search_entries(keyword)
        elif choice == "4":
            print("👋 Goodbye!")
            return
        else:
            print("⚠️ Invalid choice.")
        input("\nPress Enter to continue...")

if __name__ == "__main__":
    main()
```
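The tagging mentioned in the solution isn't implemented above; one lightweight convention is to write hashtags inside entries and pull them out with a regex when building an index. A sketch (extract_tags is a hypothetical addition):

```python
import re

def extract_tags(entry: str):
    """Collect #hashtag-style tags from a journal line."""
    return re.findall(r"#(\w+)", entry)

print(extract_tags("[09:15] Shipped the release #work #python"))  # ['work', 'python']
```

Run it over every line in `JOURNAL_DIR` and you get a date-to-tags index for free during the weekly review.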
Time saved: ~1 hour/week
Why I love it: Weekly review takes 10 mins instead of an hour-long scroll through scattered files and apps.
Final Thoughts
Most productivity advice tells you to do more in less time.
But Python lets you do less — because your scripts do it for you.
You don’t need to build some mega automation framework. Just start with small annoyances.
If a task feels boring, repetitive, or error-prone — write a Python script for it. Your future self will thank you.
If you liked this post, follow me for more Python hacks, tools, and automation ideas every week.
Got questions or want one of the full scripts? Drop a comment! Happy to share.