I Know PostgreSQL Now
How AI helped me with psql on Lambda
Note: This is a human-friendly version of the blog. For a more GPT-4o optimized version, see
Remember that scene in The Matrix where Trinity needs to fly a helicopter, and she just asks for an instant download of pilot skills? That's exactly how I felt using AI (specifically Cursor) to figure out PostgreSQL in Lambda containers. What would have been days or weeks of trial and error turned into a few hours of guided exploration.
The Challenge
Recently, I needed to run psql commands in an AWS Lambda container image. Like Trinity's helicopter moment, I had zero experience with container-based Lambda functions or PostgreSQL in containers. But instead of a direct neural download, I had Cursor - my AI copilot that helped me navigate through package managers, repositories, and Amazon Linux 2's quirks.
The best part? Just like Trinity didn't need to spend years in flight school, I didn't need to spend days reading documentation or following outdated tutorials. The AI helped me fail fast, learn from each attempt, and quickly find the working solution.
The Solution (That Actually Works)
After several "I know kung fu" moments of trying PostgreSQL 15 and various package repositories, here's the elegant solution that emerged:
FROM public.ecr.aws/lambda/provided:al2
RUN yum update -y && \
    yum install -y amazon-linux-extras && \
    amazon-linux-extras install postgresql10 && \
    yum install -y jq
The Failed Attempts (So You Don't Have to Try Them)
Like Neo's first jump in the Matrix, these attempts were necessary failures that led to understanding. Each error taught us something valuable:
PostgreSQL 15 Repository Approach:
RUN yum install -y https://download.postgresql.org/pub/repos/yum/reporpms/EL-8-x86_64/pgdg-redhat-repo-latest.noarch.rpm
Failed because it required /etc/redhat-release which doesn't exist in AL2.
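If you're curious, you can confirm this directly against the base image. A quick sketch, assuming Docker is available locally:
# Check which release files the Lambda AL2 base image actually ships
docker run --rm --entrypoint /bin/bash public.ecr.aws/lambda/provided:al2 \
  -c 'cat /etc/system-release; ls /etc/redhat-release 2>/dev/null || echo "no /etc/redhat-release"'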
Using yum-config-manager:
RUN yum install -y yum-utils && \
    yum-config-manager --add-repo https://download.postgresql.org/pub/repos/yum/15/redhat/rhel-8-x86_64/
Failed with repository conflicts.
The Complete Working Setup
1. Dockerfile.migrations
FROM public.ecr.aws/lambda/provided:al2
# Install PostgreSQL client and jq
RUN yum update -y && \
    yum install -y amazon-linux-extras && \
    amazon-linux-extras install postgresql10 && \
    yum install -y jq
COPY lambda/bootstrap /var/runtime/bootstrap
RUN chmod +x /var/runtime/bootstrap
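Before wiring this into SAM, it's worth a quick local sanity check that the PostgreSQL client actually landed in the image. A minimal sketch (the db-migrations tag is just an example name, not from the stack):
# Build the image and confirm psql is installed
docker build -f Dockerfile.migrations -t db-migrations .
docker run --rm --entrypoint psql db-migrations --version   # expect: psql (PostgreSQL) 10.x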
2. lambda/bootstrap
#!/bin/bash
# Initialize error handling
set -euo pipefail
# Handler function
function handler () {
    EVENT_DATA=$1
    # Set default values if not provided
    DB_USER=${DB_USER:-postgres}
    DB_PASSWORD=${DB_PASSWORD:-postgres}
    DB_HOST=${DB_HOST:-localhost}
    DB_PORT=${DB_PORT:-5432}
    # Run a simple query to prove connectivity
    RESPONSE=$(PGPASSWORD="${DB_PASSWORD}" psql -h "${DB_HOST}" -p "${DB_PORT}" -U "${DB_USER}" -d postgres -t -c "SELECT NOW();")
    echo "$RESPONSE"
}
# Main loop
while true
do
    # Get an event
    HEADERS="$(mktemp)"
    EVENT_DATA=$(curl -sS -LD "$HEADERS" -X GET "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/next")
    REQUEST_ID=$(grep -Fi Lambda-Runtime-Aws-Request-Id "$HEADERS" | tr -d '[:space:]' | cut -d: -f2)
    # Execute the handler function
    RESPONSE=$(handler "$EVENT_DATA")
    # Send the response
    curl -X POST "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/$REQUEST_ID/response" -d "$RESPONSE"
done
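If you want to exercise the handler's query before deploying anything, one option is to run the same psql call against a throwaway local Postgres. This is only a sketch, assuming Docker and a local psql client; the container name and password are placeholders, not part of the setup above:
# Disposable PostgreSQL 10 instance to test against
docker run -d --name pg-smoke -e POSTGRES_PASSWORD=postgres -p 5432:5432 postgres:10
sleep 5   # give the server a moment to start accepting connections

# The same query the handler runs, using the bootstrap's default values
PGPASSWORD=postgres psql -h localhost -p 5432 -U postgres -d postgres -t -c "SELECT NOW();"

# Clean up
docker rm -f pg-smoke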
3. template.yaml
Resources:
  DBSetupFunction:
    Type: AWS::Serverless::Function
    Properties:
      PackageType: Image
      ImageConfig:
        Command: ["/var/runtime/bootstrap"]
      Environment:
        Variables:
          DB_HOST: !GetAtt AuroraCluster.Endpoint.Address
          DB_PORT: !GetAtt AuroraCluster.Endpoint.Port
      VpcConfig:
        SecurityGroupIds:
          - !Ref LambdaSecurityGroup
        SubnetIds: !Ref SubnetIds
    Metadata:
      Dockerfile: Dockerfile.migrations
      DockerContext: .
Key Takeaways
Just like The Matrix teaches us that the simplest path is often the best, here's what we learned:
Keep it Simple: PostgreSQL 10 from amazon-linux-extras is all you need
Skip the Latest: Newer PostgreSQL versions cause more trouble than they're worth
VPC is Critical: Don't forget VPC and security group configuration for Aurora access (see the sketch just below)
Environment Variables: Always pass database credentials through environment variables
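On the VPC point, the piece most often missed is an ingress rule that lets the Lambda function's security group reach the Aurora cluster on port 5432. A hedged sketch with the AWS CLI; both security group IDs below are placeholders, not values from this stack:
# --group-id is the Aurora cluster's security group,
# --source-group is the Lambda function's security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-aaaaaaaaaaaaaaaaa \
  --protocol tcp \
  --port 5432 \
  --source-group sg-bbbbbbbbbbbbbbbbb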
Quick Start Commands
Build and deploy:
sam build
sam deploy --guided
Test the setup:
aws lambda invoke \
  --function-name ${STACK_NAME}-DBSetupFunction-${RANDOM_SUFFIX} \
  --payload '{"seed": true}' \
  --log-type Tail \
  --query 'LogResult' \
  --output text response.json | base64 -d && cat response.json
Troubleshooting Tips
If you can't connect to Aurora, check your VPC and Security Group configuration first
If psql commands fail, ensure your bootstrap script is executable (a quick check is sketched below)
If builds fail, make sure you're using the AL2 base image
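For the executable-bit tip above, a quick way to inspect the built image (again using the example db-migrations tag, assuming Docker):
# The bootstrap should show the executable bit, e.g. -rwxr-xr-x
docker run --rm --entrypoint ls db-migrations -l /var/runtime/bootstrap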
Remember: Sometimes the older, stable version (PostgreSQL 10) is better than fighting with newer versions in a Lambda environment!
Final Thoughts
What amazes me most about this journey is how AI is transforming the way we learn and implement new technologies. A task that would have required extensive experience with containers, Lambda, and PostgreSQL became accessible through AI-assisted development. Like Trinity's instant skill download, we're entering an era where the limiting factor isn't years of experience, but rather our ability to effectively collaborate with AI tools.
The future of development isn't about knowing everything - it's about knowing how to ask the right questions and having the right AI copilot by your side. And just like in The Matrix, the possibilities are limitless once you understand how to tap into this power.
