The shebang (#!) at the top of a script tells the system which interpreter to use. Using an explicit, correct shebang ensures your script runs with the intended shell (e.g., bash, not sh), avoiding unexpected behavior due to differing shell features.
- #!/bin/bash for Bash-specific scripts (most common).
- #!/bin/sh only for POSIX-compliant scripts (limited features, but portable).
- #!/usr/bin/env bash when you need to resolve the interpreter via $PATH (e.g., for version managers or virtual environments).

#!/bin/bash          # Good: Explicitly uses Bash
#!/bin/sh # Caution: Uses POSIX shell (no arrays, process substitution, etc.)
#!/bin/env bash      # Avoid: env usually lives in /usr/bin, so this path is unreliable
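A quick way to see why the shebang matters: the script below uses a Bash array, which a POSIX shell such as dash rejects with a syntax error. Running it via its #!/bin/bash shebang works; forcing it through sh does not. The fruit list is just illustrative data.

```shell
#!/bin/bash
# Bash arrays are not POSIX; running this file with "sh" (e.g., dash)
# fails on the next line, so the shebang must match the features used.
fruits=("apple" "banana" "cherry")
echo "First fruit: ${fruits[0]}"
echo "Fruit count: ${#fruits[@]}"
```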
Ambiguous names (e.g., $x, $tmp, process()) make scripts hard to follow. Use descriptive names for variables, functions, and files to clarify intent.
- Variables: lowercase snake_case (user_count, log_file_path).
- Constants: uppercase (MAX_RETRIES=3, API_BASE_URL="https://api.example.com").
- Functions: verb-based names that describe the action (backup_logs, validate_input).
- Avoid cryptic abbreviations (config is okay, cfg is not).

# Bad: Ambiguous names
x=5
tmp="/tmp/file.txt"
do_stuff() { ... }
# Good: Clear, descriptive names
user_count=5
temporary_log_path="/tmp/app.log"
compress_and_backup_logs() { ... }
By default, Bash ignores errors (e.g., a failed cd won’t stop the script) and tolerates unset variables. This can hide bugs. Use set -euo pipefail to make scripts robust.
- -e: Exit immediately if any command fails (non-zero exit code).
- -u: Treat unset variables as an error and exit.
- -o pipefail: Exit if any command in a pipeline fails (not just the last one).

#!/bin/bash
set -euo pipefail # Enable strict mode
# Without -u: Unset variable $undefined_var would expand to empty string
echo "Hello, $undefined_var" # Fails with "-u": "undefined_var: unbound variable"
# Without -e: Script continues even if "cd" fails
cd /invalid/directory # Fails with "-e": Exits immediately
# Without pipefail: Pipeline returns exit code of last command (grep), hiding "ls" failure
ls /invalid/path | grep "file.txt" # Fails with pipefail: "ls" returns non-zero, script exits
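Strict mode also means you must opt out explicitly when a failure is expected. A minimal sketch (the file path and variable names are illustrative):

```shell
#!/bin/bash
set -euo pipefail

# Wrap an expected failure in "if" (or append "|| true") so "set -e"
# does not abort the script when the command returns non-zero.
if ! grep -q "no-such-pattern" /etc/hosts; then
    echo "pattern not found, continuing anyway"
fi

# Provide a default instead of tripping "-u" on an unset variable.
greeting="${GREETING_FALLBACK_DEMO:-hello}"
echo "greeting=$greeting"
```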
Scripts often fail because inputs are missing, files don’t exist, or required tools (e.g., curl, jq) aren’t installed. Validate these upfront to avoid cryptic errors later.
- Check the argument count ($#).
- Check paths with [ -f "$file" ] (file) or [ -d "$dir" ] (directory).
- Check dependencies with command -v <tool>.

#!/bin/bash
set -euo pipefail
# Validate input arguments
if [ $# -ne 1 ]; then
    echo "ERROR: Usage: $0 <log_file_path>" >&2  # >&2 redirects to stderr
    exit 1
fi

log_file="$1"

# Validate log file exists
if [ ! -f "$log_file" ]; then
    echo "ERROR: Log file '$log_file' not found." >&2
    exit 1
fi

# Validate dependency (e.g., jq for JSON parsing)
if ! command -v jq &> /dev/null; then
    echo "ERROR: 'jq' is required but not installed. Install with: sudo apt install jq" >&2
    exit 1
fi
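The same pattern scales to several dependencies at once. This is a sketch, and the tool list is illustrative; collecting all missing tools first gives the user one complete error report instead of a fix-rerun loop:

```shell
#!/bin/bash
set -euo pipefail

# Report every missing tool before exiting, rather than stopping
# at the first one.
missing=0
for tool in grep sed awk; do
    if ! command -v "$tool" > /dev/null 2>&1; then
        echo "ERROR: required tool '$tool' is not installed." >&2
        missing=1
    fi
done

if [ "$missing" -ne 0 ]; then
    exit 1
fi
echo "All dependencies present."
```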
Repeating code leads to duplication and bugs. Use functions to encapsulate logic, improve readability, and enable reuse.
- Keep each function focused on one task (parse_config vs. parse_config_and_send_email).
- Use return for exit codes (0 = success) or echo to return strings (capture with result=$(my_function "input")).
- Declare variables with local to limit scope and avoid polluting the global namespace.

#!/bin/bash
set -euo pipefail
# Function to calculate average (returns exit code 0 on success)
calculate_average() {
    local sum=0
    local count=0

    # Validate input: Ensure arguments are numbers
    for num in "$@"; do
        if ! [[ "$num" =~ ^[0-9]+$ ]]; then
            echo "ERROR: '$num' is not a number" >&2
            return 1  # Non-zero exit code = failure
        fi
        sum=$((sum + num))
        count=$((count + 1))
    done

    # Avoid division by zero
    if [ "$count" -eq 0 ]; then
        echo "ERROR: No numbers provided" >&2
        return 1
    fi

    local average=$((sum / count))
    echo "$average"  # Return result via stdout
    return 0  # Success
}
# Usage
average=$(calculate_average 10 20 30)
echo "Average: $average" # Output: "Average: 20"
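Because a function signals failure via its exit code and returns data via stdout, callers can capture both in a single step. A small self-contained sketch (double is a hypothetical helper):

```shell
#!/bin/bash
set -euo pipefail

# Hypothetical helper: doubles a number, fails on non-numeric input.
double() {
    local n="$1"
    if ! [[ "$n" =~ ^[0-9]+$ ]]; then
        echo "ERROR: '$n' is not a number" >&2
        return 1
    fi
    echo $((n * 2))
}

# "if" checks the exit status while $(...) captures the stdout result;
# the failure branch does not trip "set -e".
if result=$(double 21); then
    echo "double(21) = $result"
else
    echo "double failed" >&2
fi
```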
Poorly formatted code (inconsistent indentation, long lines) is hard to parse. Adopt consistent formatting rules to improve readability.
- Use consistent indentation (e.g., 4 spaces).
- Break long commands across lines with a trailing backslash (\).
- Add spaces around operators in tests and arithmetic (==, +) and after commas (but never around = in assignments).

#!/bin/bash
set -euo pipefail
# Constants
MAX_RETRIES=3
API_URL="https://api.example.com/data"
RESPONSE_FILE="/tmp/api_response.json"

# Function to fetch data with retries
fetch_data() {
    local retries=0
    local http_code

    while [ "$retries" -lt "$MAX_RETRIES" ]; do
        # curl: silent, with a connect timeout; -o saves the body while
        # -w prints the HTTP status code to stdout for us to capture.
        # "|| true" keeps "set -e" from aborting on a network failure.
        http_code=$(curl -s -w "%{http_code}" -o "$RESPONSE_FILE" \
            --connect-timeout 10 \
            "$API_URL" || true)

        # Check if HTTP status is 200 (OK)
        if [ "$http_code" = "200" ]; then
            echo "Successfully fetched data"
            return 0
        fi

        retries=$((retries + 1))
        echo "Retry $retries/$MAX_RETRIES..."
        sleep 2
    done

    echo "Failed to fetch data after $MAX_RETRIES retries" >&2
    return 1
}

# Main logic
fetch_data
process_response "$RESPONSE_FILE"  # Assume this function exists
Comments should explain why (intent, assumptions, workarounds) rather than what (the code already shows that). Use them to clarify complex logic, edge cases, or non-obvious decisions.
- Avoid redundant comments that restate the code (x=5 # Set x to 5 is unnecessary).

#!/bin/bash
set -euo pipefail
# Purpose: Backs up log files to S3, rotating logs older than 7 days.
# Usage: ./backup_logs.sh <log_directory> <s3_bucket>
# Dependencies: aws-cli, gzip
# Author: Jane Doe ([email protected])
# ...
# Compress logs older than 7 days (mtime +7). We use gzip instead of bzip2 here
# because gzip is faster for large log files, and S3 transfer time outweighs
# storage savings of bzip2.
compress_old_logs() {
    local log_dir="$1"
    find "$log_dir" -name "*.log" -mtime +7 -exec gzip {} \;
}
Global variables (variables defined outside functions) can lead to unintended side effects (e.g., a function modifying a variable used elsewhere). Minimize them by using local variables and passing parameters.
#!/bin/bash
set -euo pipefail
# Bad: Global variable modified by function
count=0
increment_count() {
    count=$((count + 1))  # Modifies global variable
}
increment_count
echo "$count" # Output: 1 (works, but risky in large scripts)
# Good: No global variables; pass parameters and return values
increment() {
    local num="$1"
    echo $((num + 1))
}
current=0
current=$(increment "$current")
echo "$current" # Output: 1 (safer, no side effects)
Scripts should fail loudly but informatively. Use trap for cleanup (e.g., temporary files) and provide actionable error messages.
- Use trap to clean up temporary files on exit (success or failure).
- Centralize teardown in a cleanup() function to delete temp files, close connections, etc.

#!/bin/bash
set -euo pipefail
# Temporary file to clean up
temp_file="/tmp/temp_data.txt"
# Cleanup function: Runs on exit (success, error, or signal)
cleanup() {
    if [ -f "$temp_file" ]; then
        rm "$temp_file"
        echo "Cleaned up temporary file: $temp_file" >&2
    fi
}
# Trap EXIT signal to trigger cleanup
trap cleanup EXIT
# Main logic
echo "Creating temporary data..."
echo "some data" > "$temp_file"
# Simulate an error
false # Fails due to "set -e"
echo "This line will never run" # Script exits before here, but cleanup still runs
If targeting Bash (not POSIX sh), leverage modern features to write cleaner, more efficient code. Examples include arrays, parameter expansion, and process substitution.
- Arrays: files=("file1.txt" "file2.txt").
- Parameter expansion: ${var,,} (lowercase), ${var:0:3} (substring), ${var:-default} (default value).
- Process substitution: <(command) to pass output as a "file" (e.g., diff <(sort a.txt) <(sort b.txt)).

#!/bin/bash
set -euo pipefail
# Arrays: Store list of log files
log_files=(
    "/var/log/auth.log"
    "/var/log/syslog"
    "/var/log/application.log"
)

# Loop through array
for file in "${log_files[@]}"; do
    echo "Processing: $file"
done
# Parameter expansion: Default value if variable is unset
username="${1:-guest}" # Use "guest" if $1 is not provided
echo "Hello, $username"
# Lowercase conversion
uppercase_string="HELLO WORLD"
lowercase_string="${uppercase_string,,}"
echo "$lowercase_string" # Output: "hello world"
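Process substitution, listed above, lets you treat a command's output as a file and avoid temporary files entirely. A minimal sketch using here-strings for the sample data:

```shell
#!/bin/bash
set -euo pipefail

# Two lists with the same items in different order (illustrative data).
list_a=$'banana\napple'
list_b=$'apple\nbanana'

# diff normally wants two files; <(...) substitutes each sorted
# command's output as a readable file descriptor instead.
if diff <(sort <<< "$list_a") <(sort <<< "$list_b") > /dev/null; then
    echo "identical after sorting"
fi
```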
Even small scripts benefit from testing. Use linters to catch syntax errors and unit tests to validate behavior.
- Use shellcheck (a static analysis tool) to detect bugs and bad practices: shellcheck my_script.sh
- Use bats (Bash Automated Testing System) to write test cases.

Example bats tests for calculate_average (from Section 5):
#!/usr/bin/env bats
@test "average of 10, 20, 30 is 20" {
    result=$(./my_script.sh calculate_average 10 20 30)
    [ "$result" -eq 20 ]
}

@test "fails on non-numeric input" {
    run ./my_script.sh calculate_average "abc"
    [ "$status" -ne 0 ]  # Expect non-zero exit code
}
Treat shell scripts like any other code: track changes in version control and review them with peers to catch issues early.
- Keep a CHANGELOG.md to track script updates.

Writing maintainable shell scripts isn't about perfection; it's about consistency and empathy for future readers (including yourself). By following these tips (strict error handling, descriptive names, functions, and testing) you'll create scripts that are easier to debug, extend, and collaborate on.
Remember: A script that “works” today but is unmaintainable will cost far more time tomorrow than the extra effort to write it cleanly upfront.