Claude Academy
Intermediate · 13 min

Pipe Processing: stdin/stdout Magic

Learning Objectives

  • Use Claude Code in unix pipe chains
  • Process logs, CSV data, and git output
  • Transform file formats with piped Claude
  • Build data pipelines combining unix tools and Claude

Claude as a Unix Citizen

Unix philosophy: each tool does one thing well, and tools compose through pipes. Claude Code fits right into this philosophy via headless mode (the `-p` flag). It reads from stdin, processes data intelligently, and writes to stdout.

This means you can put Claude in the middle of a pipeline alongside grep, awk, sort, jq, and any other unix tool — using each for what it does best.

The Basic Pattern

data_source | claude -p "instructions" | output_destination

Data flows left to right. Claude receives whatever the previous command outputs, processes it according to your instructions, and passes the result to the next command (or stdout).
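The shape of the pattern can be tried without Claude at all. In this sketch, `tr` stands in for the `claude -p "instructions"` stage: it reads stdin, transforms it, and writes stdout, exactly as the real call would.

```shell
# A stand-in pipeline: the middle stage transforms stdin to stdout,
# in the position where `claude -p "instructions"` would sit.
printf 'error: disk full\nerror: timeout\n' \
  | tr '[:lower:]' '[:upper:]' \
  | sort > /tmp/pipe_demo.txt
cat /tmp/pipe_demo.txt
```

Swapping `tr` for `claude -p` changes the transformation from mechanical to intelligent, but the plumbing is identical.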

Log Analysis

Error Summarization

cat /var/log/app/error.log | claude -p "Summarize the errors in this log.
Group by error type, count occurrences, and identify the most critical
issues. Output as a markdown table."
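When the log is very large, a deterministic pre-aggregation pass can shrink the input before Claude ever sees it. This sketch (file paths hypothetical) counts error types with `sort | uniq -c`, so only the small summary needs to go into the `claude -p` call.

```shell
# Count occurrences of each error token; pipe the compact summary
# (not the raw log) into `claude -p` afterwards.
printf 'ERROR TIMEOUT\nERROR DISK_FULL\nERROR TIMEOUT\n' > /tmp/sample.log
grep -o 'ERROR [A-Z_]*' /tmp/sample.log | sort | uniq -c | sort -rn > /tmp/summary.txt
cat /tmp/summary.txt
```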

Filtering and Understanding

# First grep for the relevant lines, then ask Claude to analyze
grep "TIMEOUT" /var/log/app/api.log | tail -100 | claude -p \
"These are timeout errors from our API. Identify:
1. Which endpoints are timing out most
2. What time patterns exist (is it worse during certain hours?)
3. Likely root causes based on the error details"

Real-Time Monitoring

# Analyze the last 5 minutes of errors
journalctl --since "5 min ago" -u myapp | claude -p \
"Summarize what happened in this service in the last 5 minutes.
Any errors or warnings to be concerned about?"

Git Output Processing

Changelog Generation

git log --oneline -30 | claude -p "Create a changelog from these commits.
Group into: Features, Bug Fixes, Refactoring, Other.
Format as markdown with bullet points. Use past tense."

Diff Analysis

git diff main | claude -p "Summarize these changes in plain language.
What was added, what was removed, what was modified?
Focus on the 'why' not the 'what'."

Standup Summary

git log --author="$(git config user.name)" --oneline --since="1 week ago" | \
claude -p "Summarize my work this week for a standup report.
Group by feature/area. Be concise — 3-5 bullet points."

CSV and Data Processing

Data Analysis

cat sales-data.csv | claude -p "Analyze this CSV data:
1. What are the column headers?
2. What's the total revenue?
3. Which product has the highest sales?
4. Any anomalies or outliers?
Output a brief report with the key findings."

Data Transformation

cat users.csv | claude -p "Convert this CSV to JSON.
Each row should be an object. Use camelCase for keys.
Parse dates as ISO 8601. Output valid JSON."
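For simple, well-formed CSVs (no quoted commas or embedded newlines), a deterministic awk conversion may be preferable to an LLM call; Claude earns its keep on messy or irregular data. A minimal sketch of the deterministic route:

```shell
# Naive CSV-to-JSON: first row becomes keys, remaining rows become objects.
# Assumes no commas or quotes inside fields.
printf 'name,age\nAda,36\nAlan,41\n' | awk -F, '
NR==1 { for (i=1; i<=NF; i++) keys[i]=$i; printf "["; next }
{
  if (NR>2) printf ","
  printf "{"
  for (i=1; i<=NF; i++) printf "%s\"%s\":\"%s\"", (i>1?",":""), keys[i], $i
  printf "}"
}
END { print "]" }' > /tmp/users.json
cat /tmp/users.json
```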

Data Cleaning

cat messy-data.csv | claude -p "Clean this CSV:
- Standardize phone numbers to +1XXXXXXXXXX format
- Fix obvious typos in city names
- Remove duplicate rows
- Fill missing email fields with 'N/A'
Output the cleaned CSV."
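The duplicate-removal step is purely deterministic, so it can be done with `sort -u` before the Claude call, cutting token usage; Claude then handles only the judgment-heavy fixes. A sketch (file names hypothetical) that deduplicates the body while keeping the header:

```shell
# Keep the header line, deduplicate the body, then pipe onward to claude -p.
printf 'name,city\nAda,London\nAda,London\nAlan,Bletchley\n' > /tmp/messy.csv
{ head -1 /tmp/messy.csv; tail -n +2 /tmp/messy.csv | sort -u; } > /tmp/dedup.csv
wc -l < /tmp/dedup.csv
```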

Format Transformation

Markdown to HTML

cat README.md | claude -p "Convert this Markdown to clean HTML.
Use semantic tags. Add a simple CSS stylesheet inline."

JSON to YAML

cat config.json | claude -p "Convert this JSON to YAML.
Add YAML comments where the JSON values suggest what they configure."

SQL to Prisma

cat schema.sql | claude -p "Convert this SQL schema to a Prisma schema file.
Map the types appropriately and add any necessary relations."

Multi-Stage Pipelines

The real power comes from chaining Claude with other tools:
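The chaining idea can be seen with ordinary tools alone. In this sketch the `sed` stage marks where a `claude -p` transformation would slot in (these are stand-ins, not real Claude calls):

```shell
# filter -> transform -> aggregate: grep narrows, sed rewrites
# (where claude -p would go), and sort | uniq -c aggregates.
printf 'WARN db slow\nERROR db down\nERROR api 500\nINFO ok\n' \
  | grep 'ERROR' \
  | sed 's/^ERROR /component: /' \
  | sort | uniq -c > /tmp/stages.txt
cat /tmp/stages.txt
```

Each stage only has to do its own job well; the pipe handles composition.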

Pre-filter, Then Analyze

# Grep narrows the data, Claude analyzes
grep -i "error\|warning\|critical" app.log | \
tail -200 | \
claude -p "Categorize these log entries by severity and component.
Which components are most problematic?"

Claude in the Middle

# Extract, transform, load
cat raw-data.json | \
jq '.items[]' | \
claude -p "Normalize these items: fix inconsistent naming,
standardize dates, fill missing fields with defaults" | \
jq -s '.' > normalized-data.json

Claude to Claude

# First pass: extract structure. Second pass: enrich.
cat codebase-report.txt | \
claude -p "Extract all function names and their files as a JSON array" | \
claude -p "For each function in this JSON, add a one-line description
of what it likely does based on the name. Output as markdown table."

Practical Recipes

Weekly Report Generator

#!/bin/bash
{
  echo "## Git Activity"
  git log --oneline --since="1 week ago"
  echo ""
  echo "## Open PRs"
  gh pr list --state open
  echo ""
  echo "## Issues Closed"
  # gh issue list has no --since flag; use a search qualifier instead
  gh issue list --state closed --search "closed:>=$(date -d '1 week ago' +%F)"
} | claude -p "Format this as a professional weekly engineering report.
Include: summary, key accomplishments, open items, and blockers."
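The `{ ...; }` group is what makes this recipe work: stdout from every command inside the braces is concatenated into one stream, so the downstream claude call runs once over the combined report. A minimal illustration:

```shell
# All output inside the braces becomes a single stdin stream downstream.
{ echo "## Section A"; echo "data"; echo "## Section B"; } | wc -l > /tmp/lines.txt
cat /tmp/lines.txt
```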

API Response Tester

curl -s https://api.example.com/users | \
claude -p "Analyze this API response:
- Is the JSON well-formed?
- Are all expected fields present?
- Any null values that shouldn't be null?
- Response time implications from the data volume?"

Documentation Freshness Check

find docs -name "*.md" ! -newer src/services/ -print | \
claude -p "These doc files are older than the latest source change.
List them and note which might need updating based on the file names."
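`find`'s `-newer` test compares modification times against a reference file; negating it with `!` selects the stale candidates. This sketch shows the comparison in a throwaway directory (all paths hypothetical, `touch -d` is GNU coreutils):

```shell
# Create one stale doc and one fresh doc relative to a source file.
d=$(mktemp -d)
touch -d '2020-01-01' "$d/stale.md"
touch -d '2021-01-01' "$d/src.ts"
touch -d '2022-01-01' "$d/fresh.md"
# Docs NOT newer than the source file are stale candidates.
find "$d" -name '*.md' ! -newer "$d/src.ts" > /tmp/stale.txt
cat /tmp/stale.txt
```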

Performance Considerations

When using pipes, keep in mind:

  • Token limits: Large inputs (10,000+ lines) may exceed what Claude can process in one shot. Pre-filter with grep, head, or tail.
  • Cost: Each pipe invocation is a separate API call. Batch related processing into single calls when possible.
  • Latency: Headless mode still takes 2-10 seconds per call. For high-volume processing, consider batching.
# Instead of processing files one at a time:
for f in *.ts; do cat "$f" | claude -p "analyze"; done  # Slow: N API calls

# Batch them:
cat src/services/*.ts | claude -p "Analyze all these files"  # Fast: 1 API call
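When even a batched input is too large for a single call, it can be chunked with `split` and summarized per chunk. In this sketch the claude invocation is shown as a comment; the chunking loop itself is real (paths hypothetical):

```shell
# Split a large input into 100-line chunks; process each chunk once.
d=$(mktemp -d)
seq 1 250 > "$d/big.log"
split -l 100 "$d/big.log" "$d/chunk_"
n=0
for f in "$d"/chunk_*; do
  # claude -p "Summarize this chunk" < "$f" >> "$d/summaries.md"  # real call (sketch)
  n=$((n+1))
done
echo "$n" > /tmp/nchunks.txt
```

A follow-up call can then merge the per-chunk summaries into one report, trading a little latency for staying under the token limit.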

Key Takeaway

Pipe processing makes Claude a first-class unix citizen. The pattern is simple: command | claude -p "instructions" | next_command. Use it for log analysis, git output processing, data transformation, format conversion, and multi-stage pipelines. Pre-filter large data with grep/awk/jq before sending to Claude, and batch related processing into single calls for efficiency. This gives you the power of AI text processing combined with the composability of the unix toolchain.