Logging is an essential part of software development, often overlooked until something goes wrong. In Python, the built-in logging module provides a powerful and flexible framework for tracking events, debugging issues, and monitoring application behavior. However, using it effectively requires more than just sprinkling print() statements or basic log calls throughout your code. This ultimate guide dives into Python logging best practices, offering actionable tips, examples, and strategies to help you implement robust logging in your projects.
Whether you’re a beginner looking to replace print() with proper logging or an experienced developer aiming to optimize your application’s observability, this guide has you covered. Let’s explore how to harness the full potential of Python’s logging module.
Why Logging Matters
Before diving into best practices, let’s clarify why logging is worth your time. Unlike print() statements, which are temporary and lack context, logging provides a structured way to record what’s happening in your application. It helps you:
- Debug Issues: Pinpoint where and why something failed.
- Monitor Performance: Track execution times and resource usage.
- Audit Actions: Record user activity or system events.
- Understand Behavior: Gain insights into how your application runs in production.
Poor logging practices—like excessive verbosity, missing context, or inconsistent formatting—can make logs useless or even harmful by overwhelming you with noise. Done right, logging becomes a superpower for maintaining and scaling your applications.
1. Use the logging Module, Not print()
The first step to effective logging is abandoning print() for the logging module. While print() is fine for quick scripts, it lacks the features you need for real-world applications:
- Levels: Logging supports severity levels (DEBUG, INFO, WARNING, ERROR, CRITICAL) to filter messages.
- Formatting: Logs can include timestamps, module names, and more.
- Destinations: Send logs to files, consoles, or remote systems.
Example: Basic Logging Setup
```python
import logging

# Basic configuration
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

logger.info("This is an info message")
logger.warning("This is a warning message")
```
Output:

```
INFO:__main__:This is an info message
WARNING:__main__:This is a warning message
```
Best Practice: Always use logging.getLogger(__name__) to create a logger instance. The __name__ variable ensures the logger is named after the module it’s in, making it easier to trace log messages in larger projects.
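Because logger names are dotted paths, getLogger(__name__) also gives you a hierarchy for free: configuration applied to a parent logger propagates to its children. A quick sketch (the "myapp" module names here are hypothetical):

```python
import logging

# These are the names getLogger(__name__) would produce for a
# hypothetical package myapp/ with a module myapp/db/engine.py
parent = logging.getLogger("myapp")
child = logging.getLogger("myapp.db.engine")

# Set a level only on the parent
parent.setLevel(logging.WARNING)

# The child defines no level of its own, so it inherits the parent's
print(child.getEffectiveLevel() == logging.WARNING)
print(child.parent.name)
```

This is why naming loggers after modules scales well: you can silence or enable a whole subsystem by configuring one ancestor logger.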
2. Configure Logging Early
Set up your logging configuration at the start of your application. This ensures all modules use the same settings and prevents unexpected behavior from the default configuration.
Example: Custom Configuration
```python
import logging

logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
    filename="app.log",
    filemode="w",
)
logger = logging.getLogger(__name__)
logger.debug("Debugging started")
```
Output in app.log:

```
2025-04-08 10:00:00,123 - __main__ - DEBUG - Debugging started
```
Best Practice: Use basicConfig() for simple scripts, but for larger applications, consider a more robust setup with handlers and formatters (covered later).
3. Leverage Logging Levels Appropriately
Python’s logging module offers five standard levels. Use them wisely:
- DEBUG: Detailed information for diagnosing problems (e.g., variable values).
- INFO: Confirmation that things are working as expected.
- WARNING: An indication of a potential issue (e.g., deprecated feature usage).
- ERROR: A serious problem that prevented a function from completing.
- CRITICAL: A fatal error that may crash the application.
Example: Using Levels
```python
logger.debug("Variable x = %d", 42)
logger.info("User logged in successfully")
logger.warning("Configuration file not found, using defaults")
logger.error("Database connection failed")
logger.critical("System out of memory, shutting down")
```
Best Practice: Avoid overusing DEBUG in production unless filtered out, as it can clutter logs. Set the appropriate level in production (e.g., INFO or higher) to keep logs manageable.
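One common way to keep the production level manageable without code changes is to read it from the environment. A minimal sketch, assuming a hypothetical LOG_LEVEL variable (any naming convention works):

```python
import logging
import os

# "LOG_LEVEL" is a hypothetical environment variable name, not a
# logging-module convention; pick whatever fits your deployment
level_name = os.environ.get("LOG_LEVEL", "INFO").upper()
level = getattr(logging, level_name, logging.INFO)  # fall back to INFO

logging.basicConfig(level=level)
logger = logging.getLogger(__name__)
logger.info("Effective level: %s", logging.getLevelName(level))
```

With this in place, the same build can run at DEBUG in staging and INFO in production just by changing an environment variable.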
4. Add Context with Structured Logging
Logs are most useful when they provide context. Include relevant details like user IDs, request IDs, or timestamps to make debugging easier.
Example: Adding Context
```python
import logging

logger = logging.getLogger(__name__)

user_id = 12345
logger.info("User %s authenticated", user_id)
```

For more complex scenarios, use the extra parameter or custom formatters:

```python
logger.info("Processing request", extra={"user_id": user_id, "endpoint": "/api/data"})
```
Best Practice: Pass values as arguments to the logger methods using %-style placeholders (logger.info("User %s", user_id)) rather than pre-formatting with f-strings or .format(). The logging module only builds the final string if the message will actually be emitted, so deferred formatting avoids wasted work when the level is disabled.
5. Use Handlers for Flexible Output
Handlers determine where logs go: console, files, network sockets, and more. The default setup uses a StreamHandler (console), but you can add others.
Example: Multiple Handlers
```python
import logging

logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)

# Console handler
console_handler = logging.StreamHandler()
console_handler.setLevel(logging.INFO)

# File handler
file_handler = logging.FileHandler("debug.log")
file_handler.setLevel(logging.DEBUG)

# Formatter
formatter = logging.Formatter("%(asctime)s - %(levelname)s - %(message)s")
console_handler.setFormatter(formatter)
file_handler.setFormatter(formatter)

# Add handlers to logger
logger.addHandler(console_handler)
logger.addHandler(file_handler)

logger.debug("This goes to the file only")
logger.info("This goes to both console and file")
```
Best Practice: Use separate handlers for different purposes (e.g., errors to a file, info to the console) and set appropriate levels for each.
6. Rotate Logs to Manage Size
In production, logs can grow massive quickly. Use RotatingFileHandler or TimedRotatingFileHandler to manage file size or rotate logs based on time.
Example: Rotating Logs
```python
import logging
from logging.handlers import RotatingFileHandler

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)  # default is WARNING, which would drop .info() calls

handler = RotatingFileHandler("app.log", maxBytes=2000, backupCount=5)
handler.setFormatter(logging.Formatter("%(asctime)s - %(message)s"))
logger.addHandler(handler)

for i in range(100):
    logger.info("Log message %d", i)
```

- maxBytes=2000: Rotates when the file exceeds 2 KB.
- backupCount=5: Keeps 5 backup files (e.g., app.log.1, app.log.2).
Best Practice: Always enable log rotation in production to prevent disk space issues.
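The time-based variant mentioned above works the same way; a minimal sketch rotating at midnight and keeping a week of backups:

```python
import logging
from logging.handlers import TimedRotatingFileHandler

logger = logging.getLogger("timed_demo")
logger.setLevel(logging.INFO)

# Rotate at midnight, keep 7 daily backups (app.log.2025-04-08, ...)
handler = TimedRotatingFileHandler(
    "app.log", when="midnight", interval=1, backupCount=7
)
handler.setFormatter(logging.Formatter("%(asctime)s - %(message)s"))
logger.addHandler(handler)

logger.info("Nightly rotation configured")
```

Other `when` values include "S", "M", "H", "D", and "W0"-"W6" for weekly rotation on a specific weekday.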
7. Avoid Logging Sensitive Data
Logs often end up in shared systems or third-party tools. Avoid logging sensitive information like passwords, API keys, or personal data.
Example: Masking Sensitive Data
```python
password = "secret123"
logger.debug("User login attempt with password: [MASKED]")  # Good
logger.debug("User login attempt with password: %s", password)  # Bad
```
Best Practice: Sanitize inputs before logging, or use libraries like python-logging-redaction to automate redaction.
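You can also automate masking with nothing but the standard library, using a logging.Filter that rewrites records before they are emitted. A minimal sketch; the regex and field names (password, api_key) are illustrative assumptions, not a complete redaction policy:

```python
import logging
import re
from io import StringIO

class RedactingFilter(logging.Filter):
    """Masks anything that looks like password=... or api_key=... in a record."""
    PATTERN = re.compile(r"(password|api_key)=\S+", re.IGNORECASE)

    def filter(self, record):
        # Merge any %-style args first, then mask the sensitive parts
        record.msg = self.PATTERN.sub(r"\1=[MASKED]", record.getMessage())
        record.args = None
        return True

stream = StringIO()  # stand-in for a real destination, so we can inspect output
handler = logging.StreamHandler(stream)
handler.addFilter(RedactingFilter())

logger = logging.getLogger("redact_demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("login attempt password=%s", "secret123")
```

A filter attached to a handler runs before formatting, so the raw secret never reaches the log destination.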
8. Use Exception Logging
When handling exceptions, log the full stack trace with logger.exception() to capture critical debugging info.
Example: Logging Exceptions
```python
try:
    result = 10 / 0
except ZeroDivisionError:
    logger.exception("An error occurred during division")
```
Output:

```
ERROR:__main__:An error occurred during division
Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
ZeroDivisionError: division by zero
```
Best Practice: Use logger.exception() inside except blocks—it automatically includes the stack trace and sets the level to ERROR.
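If ERROR is too severe for the situation, the same traceback capture is available at any level via the exc_info=True keyword, which logger.exception() is essentially shorthand for. A small sketch:

```python
import logging
from io import StringIO

stream = StringIO()  # capture output so we can inspect it
handler = logging.StreamHandler(stream)

logger = logging.getLogger("excinfo_demo")
logger.addHandler(handler)
logger.setLevel(logging.WARNING)

try:
    int("not a number")
except ValueError:
    # Same traceback capture as logger.exception(), but at WARNING severity
    logger.warning("Recoverable parse failure", exc_info=True)

output = stream.getvalue()
```

The emitted record includes the message followed by the full traceback, just as with logger.exception().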
9. Centralize Logging in Larger Projects
In multi-module applications, centralize your logging configuration in a single place (e.g., a logging_config.py file) to ensure consistency.
Example: Centralized Config
```python
# logging_config.py
import logging

def setup_logging():
    logger = logging.getLogger()
    logger.setLevel(logging.INFO)
    handler = logging.StreamHandler()
    handler.setFormatter(logging.Formatter("%(asctime)s - %(name)s - %(message)s"))
    logger.addHandler(handler)
```

```python
# main.py
import logging

from logging_config import setup_logging

setup_logging()
logger = logging.getLogger(__name__)
logger.info("Application started")
```
Best Practice: Use a configuration file (e.g., JSON or YAML) with logging.config for even more flexibility in complex projects.
10. Test Your Logging
Logging is code, and like any code, it should be tested. Ensure your logs work as expected under different conditions.
Example: Testing Logs
```python
import logging
import unittest
from io import StringIO

class TestLogging(unittest.TestCase):
    def setUp(self):
        self.log_output = StringIO()
        self.handler = logging.StreamHandler(self.log_output)
        logger = logging.getLogger("test")
        logger.addHandler(self.handler)
        logger.setLevel(logging.INFO)
        self.logger = logger

    def test_info_log(self):
        self.logger.info("Test message")
        self.assertIn("Test message", self.log_output.getvalue())

if __name__ == "__main__":
    unittest.main()
```
Best Practice: Mock log handlers in unit tests to verify log output without writing to files or consoles.
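unittest also ships a built-in shortcut for this: the assertLogs context manager captures records without any handler setup, and fails the test if nothing matching is logged. A minimal sketch (the suite is run inline here just for demonstration):

```python
import logging
import unittest

class TestWithAssertLogs(unittest.TestCase):
    def test_warning_emitted(self):
        logger = logging.getLogger("assertlogs_demo")
        # assertLogs fails the test if no matching record is emitted
        with self.assertLogs(logger, level="WARNING") as captured:
            logger.warning("disk usage at %d%%", 95)
        self.assertIn("disk usage at 95%", captured.output[0])

# Run the suite programmatically instead of via unittest.main()
suite = unittest.TestLoader().loadTestsFromTestCase(TestWithAssertLogs)
result = unittest.TextTestRunner().run(suite)
```

captured.output holds formatted strings like "WARNING:assertlogs_demo:disk usage at 95%", and captured.records holds the raw LogRecord objects if you need to assert on levels or extra attributes.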
11. Optimize Performance
Logging can impact performance if overused. Follow these tips:
- Use Lazy Evaluation: Avoid expensive computations in log messages unless the level is enabled:
```python
if logger.isEnabledFor(logging.DEBUG):
    logger.debug("Expensive calculation: %s", some_costly_function())
```
- Filter Logs: Set higher levels in production to skip unnecessary processing.
Best Practice: Profile your application to ensure logging isn’t a bottleneck.
12. Integrate with External Tools
For production systems, integrate logging with tools like ELK Stack, Sentry, or CloudWatch. Use JSON formatting for machine-readable logs.
Example: JSON Logging
```python
import json
import logging

class JSONFormatter(logging.Formatter):
    def format(self, record):
        log_data = {
            "timestamp": self.formatTime(record),
            "level": record.levelname,
            "message": record.getMessage(),  # merges any %-style args
            "module": record.module,
        }
        return json.dumps(log_data)

handler = logging.StreamHandler()
handler.setFormatter(JSONFormatter())
logger = logging.getLogger(__name__)
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("User logged in")
```
Output:

```
{"timestamp": "2025-04-08 10:00:00", "level": "INFO", "message": "User logged in", "module": "__main__"}
```
Best Practice: Use structured logging for compatibility with log aggregation tools.
Conclusion
Python’s logging module is a versatile tool that, when used correctly, can transform how you debug, monitor, and maintain your applications. By following these best practices—using appropriate levels, configuring handlers, rotating logs, and avoiding common pitfalls—you’ll create a logging system that’s both powerful and practical. Start small with a basic setup, then scale up with handlers, formatters, and integrations as your project grows.
Logging isn’t just about recording events; it’s about telling the story of your application. Make it a story worth reading.
Hire top-tier Python developers from Carmatec to build scalable, secure, and high-performance applications tailored to your business needs.