
A portfolio is a developer's flex: we want to showcase our work, but building the actual portfolio site takes time and effort. So I decided to solve that problem in the most cloud-native, cost-efficient, and secure way possible by building a personalized portfolio generator powered entirely by AWS services.
This blog covers the architecture, implementation, security considerations, and deployment of the project.
Concept: A Portfolio From Just Your GitHub
The idea was simple:
What if you could instantly generate a sleek portfolio just by logging in with GitHub, with no need to manually input anything?
Users visit the site, click "Generate Portfolio", and authenticate with GitHub. The system then extracts their GitHub data, generates a polished summary using AI, and presents them with a live portfolio site they can share on resumes and CVs.
Frontend: React-Based Dynamic Template
I first created a reusable React portfolio template that dynamically populates user information from GitHub. It’s structured, responsive, and designed to look professional with customizable sections like:
- Bio
- Skills (based on languages used)
- Pinned Projects
- GitHub profile metrics
- Years of experience (based on when the account was created)
- Location
- Phone number
- Number of followers
- Number of public repos
Why GitHub Authentication Was Crucial
Initially, my prototype allowed portfolio generation for any GitHub username: just type it in. But I quickly realized this could be abused; someone could generate portfolios for others, impersonate developers, or misuse public data at scale.
To fix this, I implemented OAuth authentication with GitHub. Now users must sign in with GitHub, and I extract their authenticated username automatically. This enforces proper authorization and follows the principle of least privilege.
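As a rough sketch of that step, the backend can resolve the username from the OAuth access token itself rather than trusting anything the client sends. The helper below is my own illustration, assuming a standard `requests` call to GitHub's REST API:

```python
import requests

def authenticated_username(access_token):
    # Ask GitHub who the token belongs to, instead of trusting
    # a username supplied by the client
    response = requests.get(
        'https://api.github.com/user',
        headers={'Authorization': f'Bearer {access_token}'},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()['login']
```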
Architecture Overview: AWS Services Used
Lambda Functions: Core of the Backend Logic
GitHub Data Extraction Function
This Lambda function is the heart of the portfolio generation workflow. It’s triggered when the frontend sends a POST request to an API Gateway endpoint, with the authenticated GitHub username in the request body.
Here’s a breakdown of what it does:
- Receives the GitHub username as payload in a POST request from the frontend after the user authenticates
- Uses PyGitHub to extract user details
- Fetches pinned repos via GitHub GraphQL API
- Computes top 5 languages from public repos
- Generates a summary using AWS Bedrock
- Stores all data in DynamoDB
Snippet:
```python
from github import Github, Auth
import boto3, os, json

auth = Auth.Token(os.environ.get('Github_token'))
g = Github(auth=auth)
user = g.get_user(username)
# Fetch profile, repos, languages...
# Fetch pinned repos using GraphQL
# Generate summary with Bedrock
# Save to DynamoDB
```
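The real function is longer; here's a minimal sketch of two of those steps, the top-5 language computation with PyGitHub and the pinned-repo fetch via GraphQL. Function names and the exact GraphQL field selection are my own illustration:

```python
from collections import Counter
import requests

def top_languages(user, limit=5):
    # Sum bytes of code per language across the user's public repos
    totals = Counter()
    for repo in user.get_repos():
        totals.update(repo.get_languages())  # e.g. {'Python': 53200, 'HTML': 1040}
    return [lang for lang, _ in totals.most_common(limit)]

PINNED_QUERY = """
query ($login: String!) {
  user(login: $login) {
    pinnedItems(first: 6, types: REPOSITORY) {
      nodes { ... on Repository { name description url stargazerCount } }
    }
  }
}
"""

def fetch_pinned_repos(username, token):
    # Pinned repos aren't exposed by the REST API, hence GraphQL
    response = requests.post(
        'https://api.github.com/graphql',
        json={'query': PINNED_QUERY, 'variables': {'login': username}},
        headers={'Authorization': f'Bearer {token}'},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()['data']['user']['pinnedItems']['nodes']
```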
Packaging Note: When deploying this function, I zipped my Python dependencies (like PyGitHub) with my code and uploaded them as a deployment package. AWS Lambda requires all libraries to be bundled if they’re not part of the standard runtime.
To trigger the function, I set up an API Gateway POST endpoint that receives the username from the authenticated session. This design ensures that users don't have to manually enter their username. Instead, the frontend (after successful GitHub OAuth login) automatically sends the correct username to the backend for processing.
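With a Lambda proxy integration, that POST body arrives on the event as a JSON string, so the handler's entry point looks roughly like this (the validation and response shape are illustrative):

```python
import json

def lambda_handler(event, context):
    # API Gateway (proxy integration) delivers the POST body as a string
    body = json.loads(event.get('body') or '{}')
    username = body.get('username')
    if not username:
        return {'statusCode': 400,
                'body': json.dumps({'error': 'username is required'})}
    # ...extract GitHub data, generate the summary, store in DynamoDB...
    return {'statusCode': 200, 'body': json.dumps({'status': 'generated'})}
```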
DynamoDB Data Retrieval Function
This Lambda function handles the display side of the portfolio generator. Once a user's data is already extracted and stored in DynamoDB, this function retrieves it and serves it to the frontend.
How it works:
- It's triggered when the frontend makes a GET request to the API Gateway endpoint.
- The frontend appends the GitHub username as a query parameter (e.g. `...?username=sharon-dev`).
- The Lambda function reads the username from `event['queryStringParameters']`.
- It fetches the corresponding user record from the Portfolio_Data DynamoDB table.
- The data is returned as a JSON response, ready for the frontend to display.
Snippet:
```python
import json
import boto3

table = boto3.resource('dynamodb').Table('Portfolio_Data')

def lambda_handler(event, context):
    username = event.get('queryStringParameters', {}).get('username')
    response = table.get_item(Key={'Username': username})
    return {
        'statusCode': 200,
        'body': json.dumps(response['Item'], default=str)
    }
```
Frontend Integration Flow:
On the frontend:
After portfolio generation, a user is redirected to a unique URL like main.d1ljzwcnoo4d.amplifyapp.com/?username=me-dev
The React app extracts the username from the URL and makes a GET request to the API endpoint:
Snippet:
```javascript
fetch(`https://api-id.execute-api.region.amazonaws.com/prod/user-data?username=${username}`)
  .then(res => res.json())
  .then(data => {
    // Dynamically inject data into the portfolio template
  });
```
The frontend then replaces placeholders in the portfolio template with the fetched JSON, including bio, skills, project list, and summary.
Additional Implementation Details:
I enabled CORS (Cross-Origin Resource Sharing) on the API Gateway endpoint to ensure the frontend hosted on Amplify could securely interact with this Lambda function.
This allows public access to view generated portfolios via shareable links without requiring re-authentication.
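One detail worth noting: with a proxy integration, the Lambda response itself has to carry the CORS headers. A sketch, with a hypothetical helper and the allowed origin as an assumption:

```python
import json

CORS_HEADERS = {
    'Access-Control-Allow-Origin': 'https://main.d1ljzwr7cnoo4d.amplifyapp.com',  # assumed Amplify origin
    'Access-Control-Allow-Methods': 'GET,OPTIONS',
    'Access-Control-Allow-Headers': 'Content-Type',
}

def with_cors(status_code, payload):
    # Attach CORS headers to every response the function returns
    return {'statusCode': status_code,
            'headers': CORS_HEADERS,
            'body': json.dumps(payload, default=str)}
```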
This design enables links like main.d1ljzwr7cnoo4d.amplifyapp.com/?username=dev-dev to display a fully personalized, dynamically rendered portfolio anywhere, anytime.
Hosting on AWS Amplify: Fast, Scalable, and Developer-Friendly
To complete the stack, I chose AWS Amplify to host the frontend React application, and it turned out to be the perfect fit for a serverless, cost-optimized solution like this one.
Why Amplify?
Amplify offers several advantages that made it an ideal choice:
Amplify connects directly to my GitHub repository. Every time I push a change to the main branch, it automatically builds and deploys the updated frontend, with no manual steps required. This CI/CD setup ensures fast iteration and continuous delivery.
It abstracts away all the infrastructure management. There’s no need to configure EC2, NGINX, or S3 buckets manually — it handles everything from provisioning build environments to deploying across CDN edge locations.
Amplify automatically provisions an SSL certificate and serves content over HTTPS, which is essential for OAuth authentication and user trust.
I was able to configure build-time environment variables directly in the Amplify console for keys like the API base URL — keeping secrets out of the frontend code and simplifying deployment across environments.
All of this was achieved while staying entirely within the AWS Free Tier, making Amplify a cost-effective option for solo developers and students.
Beyond building a functional app, I focused heavily on security best practices, clean code, and environment management to make the project production-ready.
Environment Variables and Secrets Management
To prevent hardcoding sensitive information like API keys and tokens:
I stored all sensitive values (e.g., GitHub OAuth token, API endpoints) in a .env file for local development.
On AWS Amplify, I used the Amplify Console’s environment variables settings to securely inject these at build time, keeping secrets out of the codebase and version control.
This approach ensured clean separation between code and configuration, making the project safer and easier to maintain across environments.
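On the Lambda side, the same separation looks like this at runtime (the `Github_token` name matches the extraction snippet above; `TABLE_NAME` is hypothetical):

```python
import os

# Read from the function's environment, never from source control
GITHUB_TOKEN = os.environ['Github_token']
TABLE_NAME = os.environ.get('TABLE_NAME', 'Portfolio_Data')  # hypothetical variable
```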
Security Best Practices Followed
Least Privilege OAuth: The GitHub OAuth implementation only requests access to public data; no write or private scopes are used, reducing risk in case of token exposure.
No Persistent Credentials: User tokens and sensitive metadata are never stored in any database. Only public GitHub profile information and project metadata are saved.
CORS Headers and API Gateway Hardening: I configured CORS policies carefully to ensure only authorized frontend origins (i.e., the Amplify app) could access the backend APIs.
Validation and Error Handling: Both Lambda functions perform input validation (checking for missing or malformed usernames) and provide structured error responses for better frontend debugging (see the sketch after this list).
Clean Deployment Packages: For AWS Lambda, I ensured all dependencies (like PyGitHub, requests, and boto3) were packaged cleanly and zipped with only the necessary files, minimizing cold start times and reducing package size.
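A minimal version of that validation, using GitHub's username rules (alphanumerics and hyphens, at most 39 characters) as the sanity check; the helper name is my own:

```python
import json
import re

USERNAME_PATTERN = re.compile(r'^[A-Za-z0-9-]{1,39}$')

def validate_username(event):
    # Returns (username, None) on success or (None, error_response) on failure
    params = event.get('queryStringParameters') or {}
    username = (params.get('username') or '').strip()
    if not USERNAME_PATTERN.match(username):
        return None, {'statusCode': 400,
                      'body': json.dumps({'error': 'missing or malformed username'})}
    return username, None
```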
How I Kept Costs at $0.00 in Development
One of my goals for this project was to explore real-world cloud development without spending a dime, and I'm happy to report that I succeeded. Here's how I kept my entire serverless portfolio generator within the AWS Free Tier.
My strategy was "Build Smart, Stay Serverless".
Here are the tricks that helped:
Efficient Lambda Invocations: My Lambda functions are lightweight and short-lived, designed to run only when triggered and exit quickly to avoid compute costs.
Minimal Bedrock Usage: I limited Bedrock inference to a one-time portfolio generation per user. By using Titan Nova Lite, I avoided higher-cost models like Claude or Jurassic and stayed well under usage thresholds (a sketch of the inference call follows this list).
No Data Egress: Since everything happens within AWS (including Bedrock, DynamoDB, and Lambda), I avoided outbound data transfer charges.
CI/CD Only When Needed: Amplify only runs a build when I push to GitHub, and I kept the frontend optimized to avoid long build times.
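For reference, that one-time inference call can be as small as this sketch. I'm using the generic Converse API here, and the model ID assumes Amazon's Nova Lite:

```python
import boto3

bedrock = boto3.client('bedrock-runtime')

def generate_summary(profile_data):
    # A single short inference per user keeps token usage minimal
    response = bedrock.converse(
        modelId='amazon.nova-lite-v1:0',  # assumed model ID
        messages=[{
            'role': 'user',
            'content': [{'text': 'Write a short professional portfolio summary '
                                 f'for this GitHub profile: {profile_data}'}],
        }],
    )
    return response['output']['message']['content'][0]['text']
```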
Try It Out
Want to see how it works?
👉 Generate your own portfolio here