7 ‘Outdated’ Python Practices You Need to Stop Using Right Now

Stop writing Python like it’s 2010 — your code (and your team) deserve better.
If you’re still coding like it’s the early Python 3 days, you might be slowing down your projects, introducing bugs, or confusing your teammates. Here’s what to replace — and why it matters.
Python has aged like fine wine — but some coding habits from its earlier days haven’t.
I’ve reviewed hundreds of Python projects, from tiny scripts to production-scale applications, and I still see developers using patterns that made sense once but are now holding them back. Sometimes it’s for the sake of “backward compatibility,” other times it’s just muscle memory.
The problem? These outdated practices aren’t just “old school.” They can:
- Reduce performance
- Introduce subtle bugs
- Make your code harder to read and maintain
- Confuse newer developers who learned modern Python
Let’s break down 7 outdated Python habits that you should retire immediately — and what to do instead.
1. Using print() for Debugging
Why people still do it
It’s quick — type print(some_var) and you immediately see output. No setup required, works everywhere.
Why it’s a problem
Using print() for debugging is fine for tiny, one-off scripts — but in production or large projects, it becomes a headache because:
- No log levels — You can’t easily distinguish between normal output, warnings, and errors. Everything is just printed as plain text.
- No timestamps or metadata — You can’t tell when something happened or which part of the program it came from.
- No filtering — If you have hundreds of print statements, you can’t easily filter only the important ones without manually editing code.
- Messy code — Developers often forget to remove print() calls before pushing to production, which can clutter logs and even leak sensitive data.
Modern Alternative — the logging Module
Python’s built-in logging module solves all these problems by giving you:
- Different severity levels (DEBUG, INFO, WARNING, ERROR, CRITICAL)
- Automatic timestamps
- The ability to write logs to files, not just the console
- Easy filtering (e.g., only show warnings and errors in production)
Example:
import logging

# Configure logging once at the start of your app
logging.basicConfig(
    level=logging.DEBUG,  # Minimum level to log
    format="%(asctime)s - %(levelname)s - %(message)s"
)
user_data = {"id": 123, "name": "Alice"}
# Debug log
logging.debug("User data fetched: %s", user_data)
# Info log
logging.info("User login successful.")
# Warning log
logging.warning("User profile picture is missing.")
# Error log
logging.error("Failed to save user preferences.")
Output example:
2025-08-11 20:15:32,114 - DEBUG - User data fetched: {'id': 123, 'name': 'Alice'}
2025-08-11 20:15:32,115 - INFO - User login successful.
2025-08-11 20:15:32,116 - WARNING - User profile picture is missing.
2025-08-11 20:15:32,117 - ERROR - Failed to save user preferences.
When print() Is Still Okay
- Small, throwaway scripts
- Quick debugging during interactive Python sessions
- Learning exercises or tutorials
But in any code that will be maintained, deployed, or worked on by a team — switch to logging.
2. String Formatting with % or str.format()
Old Ways of Formatting Strings
1. Percent (%) Formatting
This style comes from C and was widely used in early Python versions:
name = "Alice"
age = 30
print("My name is %s and I am %d years old." % (name, age))
Problems with % formatting:
- Not very readable with multiple variables.
- Easy to mismatch placeholders and variables.
- Doesn’t support certain modern features like directly embedding expressions.
2. The str.format() Method
Introduced in Python 2.6 and Python 3.0:
print("My name is {} and I am {} years old.".format(name, age))
It’s an improvement over %, allowing more control and ordering:
print("My name is {0} and I am {1} years old.".format(name, age))
Problems with .format():
- Still verbose — you have to repeat variable names if used multiple times.
- Harder to read compared to modern approaches.
- More typing for something that should be simple.
Modern Way — f-Strings (Python 3.6+)
f-strings (formatted string literals) allow variables and expressions to be embedded directly in string templates:
print(f"My name is {name} and I am {age} years old.")
Why f-strings are better:
- Readable — variables are inline and clearly tied to the output.
- Less error-prone — no need to track indexes or placeholders separately.
- Supports expressions — you can do calculations directly inside the braces:
print(f"Next year I’ll be {age + 1} years old.")
- Faster — f-strings are parsed at compile time and evaluated at runtime, and are generally quicker than % or .format().
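If you want to check the speed claim on your own machine, here is a quick (and admittedly unscientific) timeit sketch; the iteration count is arbitrary:

```python
import timeit

name, age = "Alice", 30

t_percent = timeit.timeit(lambda: "%s is %d" % (name, age), number=100_000)
t_format = timeit.timeit(lambda: "{} is {}".format(name, age), number=100_000)
t_fstring = timeit.timeit(lambda: f"{name} is {age}", number=100_000)

# f-strings usually come out fastest, though exact numbers vary by machine
print(f"%: {t_percent:.4f}s  .format: {t_format:.4f}s  f-string: {t_fstring:.4f}s")
```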
When to Use f-Strings vs .format()
- f-strings → Best choice for most modern Python code (Python 3.6+).
- .format() → Only needed when working in Python < 3.6.
- % formatting → Avoid unless maintaining legacy code.
If your code is running Python 3.6 or later, there’s no reason to stick to % or .format() for new code. f-strings are cleaner, more readable, and faster — a win in every way.
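One more point in their favor: f-strings support the full format-spec mini-language inline, plus a debug form added in Python 3.8. The prices and names below are just illustrative values:

```python
price = 49.99
quantity = 3

print(f"Total: {price * quantity:.2f}")  # two decimal places: Total: 149.97
print(f"{price:>10.1f}")                 # right-aligned to width 10
print(f"{quantity=}")                    # debug form (3.8+): quantity=3
```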
3. Manually Managing File Resources
In early Python code, you often see something like this:
f = open("data.txt", "r")
data = f.read()
f.close()
At first glance, this looks fine — open the file, read from it, close it.
But the problem is that you’re responsible for remembering to call close() every time.
Why This Is Outdated & Risky
1. Forgetting to close the file
If you forget f.close(), the file handle stays open. This can lead to:
- Memory leaks
- Too many open file handles (causing “Too many open files” errors)
- File locks not being released properly
2. Errors before close()
If an error occurs between open() and close(), the close() line never runs — leaving the file open.
3. Manual cleanup everywhere
You have to explicitly call close() every time, which makes the code repetitive and error-prone.
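Without a context manager, the only safe manual pattern is try/finally, which guarantees close() runs even if an error occurs. A minimal sketch (using write mode so it is self-contained):

```python
# Manual cleanup pattern: correct, but you must repeat it everywhere
f = open("data.txt", "w")
try:
    f.write("hello")
finally:
    f.close()  # runs even if the write above raises

print(f.closed)  # True
```

The with statement compresses exactly this boilerplate into one line.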
Modern Way — Context Managers (the with Statement)
Python’s with statement automatically manages file resources.
Example:
with open("data.txt", "r") as f:
    data = f.read()
- Automatic cleanup — the file is closed as soon as the with block ends, even if an error happens.
- Less code — no need for an explicit close() call.
- Cleaner structure — keeps the resource’s scope limited to where it’s used.
What Happens Behind the Scenes
When you write:
with open("data.txt") as f:
    ...
Python does three things automatically:
1. Calls open() and assigns the file object to f.
2. Executes your code inside the with block.
3. Calls f.close() when the block finishes — even if an exception is raised.
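The same enter/exit protocol is available for your own resources. Here is a minimal sketch using the standard library’s contextlib.contextmanager; the resource name "demo" is purely illustrative:

```python
from contextlib import contextmanager

@contextmanager
def managed_resource(name):
    print(f"acquiring {name}")  # runs when the with block is entered
    try:
        yield name              # the value bound by `as`
    finally:
        print(f"releasing {name}")  # runs on exit, even if the body raises

with managed_resource("demo") as r:
    print(f"using {r}")
```

The code before yield plays the role of open(), and the finally block plays the role of close().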
When Else to Use Context Managers
This pattern isn’t just for files. Any resource that needs explicit cleanup benefits from it:
- Database connections
- Network sockets
- Temporary files
- Thread locks
Example with SQLite:
import sqlite3

with sqlite3.connect("mydb.db") as conn:
    cursor = conn.cursor()
    cursor.execute("SELECT * FROM users")
One caveat: sqlite3’s context manager commits or rolls back the transaction — it does not close the connection itself. Close the connection explicitly (or wrap it in contextlib.closing()) when you’re done.
If you’re still using open() without a with block, you’re relying on your memory to clean up resources. Let Python handle it automatically — it’s safer, cleaner, and less error-prone.
4. Using range(len(...)) for Iteration
You’ll often see beginners write:
fruits = ["apple", "banana", "cherry"]

for i in range(len(fruits)):
    print(fruits[i])
It works — but it’s not considered Pythonic.
This style is a carryover from languages like C, C++, and Java, where you typically use an index-based loop to access list elements.
Why This Is Outdated in Python
1. Unnecessary complexity
You’re creating a range object, iterating over indexes, and then doing a second lookup fruits[i] — all just to get the value.
2. Harder to read
Someone reading your code has to mentally map i → index and fruits[i] → actual value.
3. More chances for bugs
If you mess up the index math (e.g., len(fruits) - 1), you risk an IndexError.
4. Against Python’s philosophy
Python’s guiding principle is readability and simplicity — “loop over items, not over numbers.”
Modern Pythonic Approach — Direct Iteration
Python lets you iterate directly over items without touching indexes:
for fruit in fruits:
    print(fruit)
Benefits:
- Shorter and cleaner
- More readable
- No risk of index mistakes
- Works with any iterable (lists, tuples, sets, generators)
When You Actually Need the Index
If you truly need both the index and the value, use enumerate():
for i, fruit in enumerate(fruits):
    print(i, fruit)
Output:
0 apple
1 banana
2 cherry
Why enumerate() is better than range(len(...)):
- No manual indexing
- Index and value are clearly tied together
- Less prone to off-by-one errors
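A small bonus worth knowing: enumerate() takes an optional start argument, handy for human-friendly 1-based numbering:

```python
fruits = ["apple", "banana", "cherry"]

# start=1 shifts the counter without any manual index math
for i, fruit in enumerate(fruits, start=1):
    print(f"{i}. {fruit}")
# 1. apple
# 2. banana
# 3. cherry
```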
Unless you have a specific reason to iterate over indexes, avoid range(len(...)). Loop directly over elements for cleaner, more readable, and more maintainable Python code.
5. Writing Loops for Simple List Transformations
A common beginner pattern is:
numbers = [1, 2, 3, 4, 5]
squares = []
for x in numbers:
    squares.append(x ** 2)
It works — but it’s verbose for something that’s purely about transforming one list into another.
Why This Is Outdated in Python
- Extra boilerplate
You need to create an empty list, write a loop, and append items — three separate steps for a simple transformation. - Harder to visually parse
Readers must scan several lines to understand a transformation that could be expressed in one. - Against Python’s “expressive code” philosophy
Python encourages readable, concise code — especially for common patterns like filtering and mapping.
Modern Pythonic Approach — List Comprehensions
List comprehensions let you create transformed lists in one clear, compact expression:
squares = [x ** 2 for x in numbers]
Benefits:
- Shorter — one line instead of three or four.
- Clearer intent — you immediately see: “take each x in numbers and square it.”
- Slightly faster — list comprehensions are optimized in C under the hood.
When to Stick with a Loop
While list comprehensions are great, there are cases where a traditional loop is better:
- Complex logic with multiple steps inside the loop
- Side effects (e.g., writing to a file, logging) instead of just building a list
- Readability concerns — if the comprehension is too long or nested, it can become harder to read
Adding Conditions (Filtering)
List comprehensions can also filter elements:
even_squares = [x ** 2 for x in numbers if x % 2 == 0]
Reads as: “for each x in numbers, if x is even, square it.”
Transforming Other Iterables
They’re not limited to lists — you can use comprehensions with:
- Tuples → via generator expressions
- Sets → with set comprehensions
- Dictionaries → with dict comprehensions
Example — Dictionary comprehension:
num_map = {x: x ** 2 for x in numbers}
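For completeness, quick sketches of the set-comprehension and generator-expression forms mentioned above:

```python
numbers = [1, 2, 3, 4, 5]

# Set comprehension: duplicates collapse automatically
unique_parities = {x % 2 for x in numbers}
print(unique_parities)  # {0, 1}

# Generator expression: lazy, values are produced one at a time
total = sum(x ** 2 for x in numbers)
print(total)  # 55
```

The generator form is especially useful with functions like sum(), max(), and any(), since no intermediate list is ever built.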
If you’re transforming or filtering data just to build a new list, list comprehensions make your code shorter, cleaner, and often faster. Reserve regular for loops for multi-step, complex logic.
6. Importing Everything with from module import *
You’ve probably seen this in older tutorials or quick scripts:
from math import *
print(sqrt(16))
It works — you can call sqrt() without the math. prefix — but it comes with hidden dangers.
Why This Is Outdated & Risky
1. Namespace Pollution
When you do from module import *, Python dumps every public function, variable, and class from that module into your current namespace. This can overwrite existing names in your code without you realizing it.
Example:
from module_a import *
from module_b import *  # overwrites some_a_function from module_a
2. Harder to Read & Maintain
When someone else reads your code, they can’t tell where a function came from just by looking at it. If they see load_data(), is it from pandas? From your own utils? From somewhere else entirely? They have to search.
3. Debugging Nightmares
Name collisions can lead to subtle bugs where the “wrong” function is called. These bugs are hard to detect because no error is thrown — Python just uses the overwritten function.
4. Poor Tooling Support
Code editors, linters, and autocomplete tools can’t easily infer which functions exist in your namespace, making your dev experience worse.
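The collision danger above has a concrete, runnable example in the standard library itself: math and cmath share many names, including sqrt.

```python
from math import *    # brings in math.sqrt (returns float)
from cmath import *   # silently replaces sqrt with cmath.sqrt (returns complex)

# The second import wins, and no error or warning is raised
print(sqrt(16))  # (4+0j), not 4.0
```

Nothing in the call site hints that sqrt now returns complex numbers; the bug only surfaces downstream.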
Better Alternatives
1. Explicit Imports
Import only what you need:
from math import sqrt, pi
print(sqrt(16), pi)
2. Module Import
Import the whole module and use dot notation:
import math
print(math.sqrt(16), math.pi)
When import * Is Acceptable
There are very rare, controlled cases:
- In interactive sessions (like Jupyter notebooks) where quick access matters.
- Inside __init__.py files when you intentionally want to re-export a public API from submodules.
- When working with tkinter or similar UI libraries whose names are used heavily within one small, contained scope.
Even then — be cautious, and document what you’re doing.
from module import * might feel convenient, but it’s a shortcut that costs you clarity, safety, and maintainability. Import only what you need or use full module imports — your future self (and your teammates) will thank you.
7. Using Mutable Default Arguments in Functions
You might have seen (or written) code like this:
def add_item(item, items=[]):
    items.append(item)
    return items
At first glance, it looks fine — items defaults to an empty list if nothing is passed.
But here’s the catch:
Python only evaluates default argument values once, at function definition time — not each time the function is called.
Why This Is Dangerous
Let’s run it:
print(add_item("apple")) # ['apple']
print(add_item("banana")) # ['apple', 'banana'] <-- Wait, what?
What happened?
- On the first call, items is the same list that was created when the function was defined.
- On the second call, instead of creating a new list, Python reuses that same list — because the default [] wasn’t re-evaluated.
This means state is shared between calls in a way you probably didn’t intend.
Why It Happens — Behind the Scenes
When Python executes a def statement:
1. It creates the default argument objects ([] in this case) once.
2. It stores them in the function’s __defaults__ attribute.
3. Every time you call the function without providing that argument, Python reuses the same object from __defaults__.
So your “empty” list is actually a single persistent object.
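You can watch this happen by inspecting __defaults__ directly:

```python
def add_item(item, items=[]):
    items.append(item)
    return items

print(add_item.__defaults__)  # ([],)
add_item("apple")
print(add_item.__defaults__)  # (['apple'],) the "empty" default has mutated
```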
The Correct Pattern
The safe and Pythonic way is:
def add_item(item, items=None):
    if items is None:
        items = []
    items.append(item)
    return items
Now:
print(add_item("apple")) # ['apple']
print(add_item("banana")) # ['banana'] ✅
Here’s why it works:
- None is immutable, so it’s safe to use as a sentinel value.
- Inside the function, you explicitly create a new list if none was provided.
When Mutable Defaults Are Actually OK
There are very rare cases where you want to share state across calls — for example, caching:
def get_cached_result(x, cache={}):
    if x not in cache:
        cache[x] = expensive_computation(x)  # placeholder for a costly function
    return cache[x]
But in these cases:
You should document it clearly.
You should name it explicitly so others know it’s intentional.
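That said, if caching is the goal, the standard library already offers a purpose-built, clearly named tool in functools.lru_cache, which avoids the mutable-default trick entirely. The squaring function below is just a stand-in for real expensive work:

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # cache every distinct argument
def expensive_computation(x):
    # Stand-in for genuinely expensive work
    return x ** 2

print(expensive_computation(10))  # computed
print(expensive_computation(10))  # served from the cache
print(expensive_computation.cache_info())
```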
Avoid using mutable objects (like lists, dicts, or sets) as default argument values unless you explicitly want persistent shared state. Most of the time, use None and initialize inside the function.
Conclusion
Modern Python isn’t just about knowing the newest syntax — it’s about writing cleaner, safer, and more maintainable code.
If you still find yourself clinging to old habits, remember:
- Cleaner code is faster to debug
- Modern practices improve collaboration
- Small syntax updates can have huge long-term benefits
Next time you open an old project, look for these outdated patterns. Replace them. Your future self — and your teammates — will thank you.
Write Python for the present, not for the past — your code will live longer than you think.
