About
Articles by Pablo Alejandro
- Data Engineer con R / Buenos Aires (Mar 25, 2019)
Auth0 is looking for a full-time Data Engineer who enjoys transforming and analyzing data to help the rest of the…
Activity
- Pulumi is Hiring! If you are considering a change, please stop by our careers page! https://lnkd.in/dXcJrd_k If anything seems interesting to you…
Shared by Pablo Alejandro Seibelt
- With all the “AI will replace devs”, it feels like many forgot why they got into programming in the first place. It was neither $$, nor the job. It…
Liked by Pablo Alejandro Seibelt
Experience & Education
Pulumi
Licenses & Certifications
Projects
redshiftTools
- Present
Created an R package for importing R data frames into Amazon Redshift
https://github.com/sicarul/redshiftTools
Bluelytics
- Present
Website built with Angular.JS and Django (Python) that reports Argentina's blue (parallel) dollar exchange rates.
Campañas
- Present
From its initial implementation through a migration to a new version of the tool, we have handled maintenance and developed new features. The tool relies on a set of ETL processes that make the information required to run the campaigns available to it.
Other creators
Languages
English
Full professional proficiency
Spanish
Native or bilingual proficiency
Recommendations received
6 people have recommended Pablo Alejandro
Explore more posts
Deepak Agrawal
Lambda is cheap… until it runs 20 million times per hour.
We analyzed 400+ serverless workloads. 60% had NO cost guardrails.
Serverless is supposed to be cheap. But in reality it's the fastest way to burn cash invisibly (if you don't set guardrails from day one).
Here's what I saw:
🚩 No concurrency limits. One bad loop → 3,000 concurrent Lambdas → instant spike → ops team wakes up confused.
🚩 Event storms with zero throttling. SNS, SQS, DynamoDB Streams triggering like crazy—no retry logic, no backpressure.
🚩 No budget alerts. No anomaly detection. People trust the cloud bill after the fire. Not before.
🚩 Over-reliance on default memory configs. 2GB Lambdas running a 150MB function, 1000x a minute. Multiply that by 30 regions.
🚩 Huge cold start waste. Dev teams chasing speed. But paying for idle spin-ups in traffic patterns they never profiled.
I think the worst part is that most teams had no visibility into which invocation patterns were driving 80% of cost. They just assumed “It’s serverless, it must be optimized.”
Here’s the framework we now apply to every client before scaling serverless to prod.
𝗧𝗵𝗲 𝗦𝗲𝗿𝘃𝗲𝗿𝗹𝗲𝘀𝘀 𝗖𝗼𝘀𝘁 𝗚𝘂𝗮𝗿𝗱𝗿𝗮𝗶𝗹𝘀 𝗖𝗵𝗲𝗰𝗸𝗹𝗶𝘀𝘁:
✅ Set concurrency caps for all Lambdas
✅ Define sane retry/backoff policies on event sources
✅ Profile and right-size memory + duration (not guess)
✅ Use real-time cost anomaly detection (per function)
✅ Tag all workloads with ownership + purpose (for chargeback clarity)
Serverless doesn't need to be expensive. But it will be (if you treat it like a free lunch).
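The first checklist item above, concurrency caps, can be applied per function with a single API call. A minimal sketch with boto3, where the function names and the cap values are illustrative assumptions rather than anything from the post:

```python
# Sketch: cap concurrent executions for a set of Lambda functions.
# Function names and cap values are hypothetical examples.
import boto3

lambda_client = boto3.client("lambda")

FUNCTION_CAPS = {
    "orders-processor": 100,    # hypothetical function name and limit
    "image-thumbnailer": 50,
}

for function_name, cap in FUNCTION_CAPS.items():
    # Reserve a fixed slice of account concurrency for this function;
    # invocations beyond the cap are throttled instead of scaling unbounded.
    lambda_client.put_function_concurrency(
        FunctionName=function_name,
        ReservedConcurrentExecutions=cap,
    )
    print(f"Capped {function_name} at {cap} concurrent executions")
```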
Lior Weinstein
I don't care, as your CTO advisor, if you want to build like Google.
I don't care if you read about microservices on Medium.
I don't care if your last CTO worked at Amazon.
I don't care if "everyone's doing it in 2025."
There's one reason you're working with me - because you want the runway to hit your next milestone. That's it.
You don't need to apologize for your monolith.
You don't need to explain why you're using Heroku.
You don't need to send me future architecture diagrams.
Let's talk about what actually matters:
- Are your customers happy?
- Are you spending less than $1,000 on infrastructure?
- Can you deploy in under 10 minutes?
Because at the end of the day:
- We're a startup with 8 customers.
- We have $4K in MRR.
- And every unnecessary microservice brings us closer to death.
So what does it really take to make this work?
- Build what customers want.
- Keep your total burn low - infra, salaries, everything.
- Keep the runway going another month.
- And above all - be pragmatic.
That's the runway-to-growth playbook. It's not about how many microservices you have - it's about how many months of runway you've got left.
That's how you build in 2025. No drama. No ego. Keep the runway long.
Elena Verna
Most PLG motions fail not because of the product, but because of the data. Yup. Here is a hot take for you: your (B2B) PLG motion is a data problem, not just a product problem.
Sure, your product needs to be self-serve and sell itself - yada yada. But if your data isn't in order, your PLG motion is dead in the water.
Why? Because in PLG, there's no human hand-holding. You rely entirely on data to understand what's working.
-> Sales needs data to upsell beyond self-serve.
-> Product/Growth needs data to build better user experiences.
-> Marketing needs data to run lifecycle campaigns effectively.
The problem? Your data stack is likely built for traditional sales. Messy. Manual. Delayed. Inaccessible. Stuck in silos. Yuck.
For PLG, here is the bare minimum for data tracking. You need real-time data on:
✅ Demographics (geo, company name/size, customer dept/seniority)
✅ Acquisition (accounts created, new logos)
✅ Activation (oh so important!: set up, aha!, habit loops, feature usage)
✅ Monetization (trials, conversions, package chosen, in/voluntary churn)
✅ Engagement (frequency of engagement, feature use, intensity, volume)
If your product and sales teams aren't actually using this data -> 🔥 you've found your next project 🔥
And then there are tools. Must-haves are:
🛠 CDP: Segment, Amplitude, Hightouch (for data governance)
📊 Behavioral analytics: Amplitude, Mixpanel, June.so
📂 CRM: Salesforce (if you can wrangle it - although I've personally never seen it done) or newcomers like Clarify
Nice-to-haves:
🔄 Marketing automation: Customer.io, HubSpot
🧪 A/B Testing: Optimizely, VWO, LaunchDarkly
📈 Lead Scoring: Madkudu
And honestly, it's less about the tools and more about connecting them properly to enable data access. Get that right, and your PLG motion won't just survive—it'll thrive. 🚀
I've collaborated with Austin Hay and wrote a post about this data madness - so do check it out for all of the details on data and tools: https://lnkd.in/enzWnBah
#growth #plg
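As a hedged illustration of what instrumenting a couple of the event categories above could look like, here is a minimal sketch using Segment's analytics-python library; the write key, user ID, and event and property names are made-up placeholders, not anything from the post:

```python
# Sketch: sending acquisition and activation events to a CDP (Segment).
# The write key, user ID, and event/property names are hypothetical.
import analytics

analytics.write_key = "YOUR_SEGMENT_WRITE_KEY"  # placeholder

# Acquisition: a new account was created.
analytics.track(
    user_id="user_123",
    event="Account Created",
    properties={"plan": "free", "company_size": "11-50", "source": "organic"},
)

# Activation: the user reached a hypothetical "aha" moment.
analytics.track(
    user_id="user_123",
    event="First Dashboard Created",
    properties={"time_to_value_minutes": 12},
)

analytics.flush()  # ensure queued events are sent before the script exits
```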
Tristan Kalos
Last year we found over 4,000 secrets in GraphQL endpoints. This included:
- 1,396 access tokens
- 49 passwords
- 2 credit card numbers
And thousands of other secrets leaked through stack traces. All publicly accessible.
GraphQL makes it way too easy to expose sensitive data.
At BSides SF, we shared a simple takeaway: → use the GraphQL Secure by Default checklist.
Start with basics and then focus on the GraphQL-specific best practices:
- Query depth & cost limits
- Rate limiting
- Aliasing limits
- Secrets management
And if you want to go deeper:
- Field-level RBAC
- Federation boundary hardening
- Continuous DAST (e.g. Escape)
- Solid observability
Want the full security checklist? Drop a “GraphQL” below.
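One item from the checklist above, a query depth limit, can be enforced before a query is executed by walking the parsed document. A minimal sketch in Python assuming the graphql-core package; the depth threshold of 5 is an arbitrary illustrative choice, not a value from the post:

```python
# Sketch: reject GraphQL queries nested deeper than a chosen limit.
# Uses graphql-core's parser; MAX_DEPTH = 5 is an arbitrary example.
from graphql import parse
from graphql.language.ast import FieldNode, OperationDefinitionNode

MAX_DEPTH = 5

def selection_depth(node, depth=0):
    """Return the maximum field nesting depth under this AST node."""
    selection_set = getattr(node, "selection_set", None)
    if selection_set is None:
        return depth
    child_depths = []
    for selection in selection_set.selections:
        # Only count actual fields; fragment spreads would need extra handling.
        next_depth = depth + 1 if isinstance(selection, FieldNode) else depth
        child_depths.append(selection_depth(selection, next_depth))
    return max(child_depths, default=depth)

def check_depth(query: str) -> None:
    document = parse(query)
    for definition in document.definitions:
        if isinstance(definition, OperationDefinitionNode):
            depth = selection_depth(definition)
            if depth > MAX_DEPTH:
                raise ValueError(f"Query depth {depth} exceeds limit {MAX_DEPTH}")

check_depth("{ user { friends { friends { name } } } }")  # passes: depth 4
```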
Pau Sabria
“Remote doesn’t work.”
Yeah, if your team is scattered across 3 timezones, nobody owns decisions, and you manage via Slack threads.
We built Olapic ($130M exit) with engineers in Argentina. Full-time, high-trust, same timezone. Now at Remotely, we've helped dozens of startups do the same with LatAm talent.
Remote works beautifully if you're not doing it like a moron. You need:
- 3-4 hrs of real-time overlap
- Local leads who can ship without waiting on HQ
- Tight feedback loops (Notion docs ≠ communication)
The minute you spread your team across a 9-hour gap and expect speed - you're outsourcing your roadmap to latency.
Remote work isn't broken. Your ops are.
Matt Watson
If you work in tech, you have to learn to speak in metaphors. Take complicated topics and figure out how to explain them in simpler ways.
People can be overly literal and logical. Metaphors also help break out of that literal language.
Technical debt is dirty dishes in the sink.
Microservices are like hiding all your toys in different rooms.
Communication is a critical skill in product development. Learn to speak in metaphors to convey complex topics.
Dave Slutzkin
coding effectiveness plateauing? despite the major version number bump, this only feels like another incremental improvement in my testing.
use it for coding! but here's my thinking on where we are:
- LLMs will plateau at some point. there aren't infinite gains possible from the current architecture. like any tech it improves super fast until suddenly it doesn't any more, and then it requires a lot more effort to get to the next level.
- context windows aren't improving fast. claude 4 has a 200k context window, a 0% increase from claude 3, which was released a year ago and had a 200k context window. this is related to the previous point: the current architecture (with its quadratic attention burden) will at some point plateau, and maybe we're already there.
- using LLMs for coding is basically brute force. the approach for coding is to throw "everything" at the LLM and hope it gives great results. but "everything" isn't everything, because LLMs can't handle your whole codebase in a context window, so for each query the tools guess at the relevant parts of your codebase. that's hard, and they don't do an amazing job of it yet. (the reason this is the approach is The Bitter Lesson, which every model provider quotes: throwing more computation at a problem is more likely to get you better and more scalable results than trying to use smart heuristics.)
- this means just incremental improvements. what they're doing is training the models better for coding (or rather, now for software engineering), which is leading to incremental improvements, but nothing as magical as the sudden gpt-3.5 leap which they're still riding. llama 4 was an incremental improvement, gemini 2.5 was an incremental improvement, gpt-4.1 was an incremental improvement. windsurf's swe-1 wasn't even an improvement, but their hope is that the approach they're taking will allow it to shoot past the others with their better training data.
- smarts around context management are at a premium. the more work IDEs/CLIs do to pass in the right context, the better your results are. and also the more work you do to scaffold the right documentation.
- conclusion? use claude-4 for coding. it's no more expensive and it is better, it's just not infinitely better. but also - CLIs that do the smartest context engineering and management are going to be the winners in the near term, because there's no extra magic coming from the models.
Oliver Laslett
Just upgraded to dbt 1.10 and getting flooded with warnings like "Ignore unexpected key meta"? 😅 What if I told you there was a 1-click fix...
dbt is changing how meta and tags should be structured: they now need to live under config blocks instead of being top-level properties.
But stop! Don't manually update hundreds of YAML files. We've built metamove, a CLI tool that automates this migration while preserving all your comments and formatting.
pipx install metamove
metamove models/* seeds/* snapshots/*
It safely transforms your files to the new structure, handles nested configs intelligently, and saves to a separate directory by default so you can review changes first. Because nobody has time to manually migrate every schema.yml file in their dbt project 🙃
Right now this supports moving the "meta" and "tags" properties. Are there other properties you want us to add? Let me know in the comments.
#dbt #AnalyticsEngineering #DataEngineering #OpenSource
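For context, a schematic sketch of the kind of change being described (the model name and keys here are made-up examples, not taken from the post): a top-level meta/tags entry in a schema.yml moves under config, roughly like this:

```yaml
# Before: top-level meta and tags on a model (hypothetical example).
models:
  - name: orders
    meta:
      owner: data-team
    tags: ["finance"]
---
# After (dbt 1.10 style): the same properties nested under config.
models:
  - name: orders
    config:
      meta:
        owner: data-team
      tags: ["finance"]
```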
Admond Lee
2 mistakes made by Terraform Labs that wiped out $40B of crypto market value in 3 days.
Mistake 1: Built a house of cards (with zero backup plans)
Terra’s entire system relied on circular logic — UST’s value depended on LUNA, and LUNA’s value depended on UST staying pegged. It was like two people trying to hold each other up on a sinking raft. When market confidence wobbled, the entire house of cards collapsed. And that’s exactly what happened to the Terra and LUNA tokens.
Mistake 2: Offered unsustainable returns to fuel growth
The Anchor Protocol – Terraform's lending platform – offered a mind-boggling 20% APY on UST deposits. But those yields weren’t organic — they were subsidised by Terra’s reserves. This wasn't just generous, it was mathematically unsustainable. The yields weren't backed by genuine economic activity but were essentially subsidised to attract users and capital.
The result? Terraform Labs defrauded investors and wiped out the entire crypto market in 3 days.
I’m not a crypto bro, but here are my takeaways:
• The Terra collapse wasn't just a company failure – it was a systemic meltdown that revealed the fragility of algorithmic stablecoins and triggered a broader crypto crisis.
• From $60 billion empire to $0 in 72 hours, Terraform Labs' story is the ultimate cautionary tale of what happens when financial engineering replaces fundamentals, when confidence is confused for value, and when success breeds the kind of hubris that blinds you to obvious risks.
• Lastly, only invest in crypto with money that you can afford to lose. Never bet your savings on something that’s very volatile.
Read how Terraform Labs collapsed here (full story): https://lnkd.in/gqUutmyP
#startup #crypto
Ellis Seder
🤔 CTO salaries at a scaleup: £150K or £300K? Same title. Completely different roles. So, what’s the deal with scaleup CTO salaries in 2025?
Job titles often don’t help, because the same title can mean completely different things at different companies. This means benchmarking CTO salaries is tough, and so it’s confusing for job hunters to know what to expect.
🤑 𝐒𝐨, 𝐥𝐞𝐭’𝐬 𝐛𝐫𝐞𝐚𝐤 𝐝𝐨𝐰𝐧 𝐂𝐓𝐎 𝐬𝐚𝐥𝐚𝐫𝐢𝐞𝐬 𝐢𝐧 𝐋𝐨𝐧𝐝𝐨𝐧:
Series A: £150K to £200K
Late-stage VC/PE-backed: £175K to £250K
Established: £275K to £300K
B2B Fintech, SaaS and Enterprise AI continue to lead, salary-wise. And, of course, US firms pay higher than European ones.
Remote vs hybrid WFH – does this change salary expectations? Absolutely! Companies are now paying more for London-based employees due to the high cost of living. But if you're fully remote in a lower-cost location, expect a different pay scale.
𝐍𝐚𝐭𝐮𝐫𝐚𝐥𝐥𝐲, 𝐭𝐞𝐚𝐦 𝐬𝐢𝐳𝐞 𝐯𝐚𝐫𝐢𝐞𝐬 𝐰𝐢𝐥𝐝𝐥𝐲 𝐛𝐞𝐭𝐰𝐞𝐞𝐧 𝐭𝐡𝐞𝐬𝐞 𝐫𝐨𝐥𝐞𝐬:
Series A: Expect 10 to 50 tech staff
Late-stage VC/PE-backed: 50+ tech staff
Established SME: 100+ tech staff
𝐄𝐪𝐮𝐢𝐭𝐲, 𝐬𝐡𝐚𝐫𝐞𝐬, 𝐨𝐫 𝐛𝐨𝐧𝐮𝐬 𝐩𝐨𝐭𝐞𝐧𝐭𝐢𝐚𝐥? 💲
Ok, so this part is interesting, as VC and PE firms are offering potentially huge payouts on a trade sale or event. For Series A, expect a higher % in equity (1-2%), but obviously higher risk. Late-stage VC/PE-backed is probably the sweet spot: it's less % on paper, but a life-changing upside with a company that is already established and has a higher chance of success. And at a listed company, you'll get stock options that can be sold (£100K/£200K a year).
𝐖𝐡𝐚𝐭’𝐬 𝐚𝐜𝐭𝐮𝐚𝐥𝐥𝐲 𝐧𝐞𝐞𝐝𝐞𝐝 𝐢𝐧 𝐞𝐚𝐜𝐡 𝐫𝐨𝐥𝐞?
Series A: Hands-on, player-coach role (sorry, I know this term annoys CTOs, but it's what's requested) - Deeply involved in technical work - Mentoring junior team members - Typically a VP Engineering moving up - Strong technical architecture knowledge required.
Late-stage VC/PE-backed: Proven ability to scale software teams - C-suite experience essential - Not hands-on, but focused on team-building and leadership, with corporate experience in governance and security.
Established company: Manage 100+ tech staff across multiple countries - Influence the board and investors - Focus on strategy and large budgets - Experienced in matrix management and larger-scale operations.
Additionally, we’re seeing a shift whereby many companies are hiring seasoned CTOs as part-time advisors or mentors to guide their leadership teams.
So, the conclusion: same CTO title, but a huge difference between each company. 🚀
Want to discuss your career path or potential opportunities? 👉 Hiring a CTO? DM Ellis for a confidential chat about executive search or a Q2 2025 market benchmark.
#CTO #salaries #fintech
Amartya Jha
Launching: No-BS Engineering Metrics
Most engineering dashboards today are built on noise, only showing the number of commits, PRs, and comments. Vanity metrics with zero context, and zero connection to real impact.
They don’t:
- Scan your code.
- Know what your developers are actually building.
- See the security or quality risks being introduced.
That’s why “developer productivity” metrics are BS. We’re changing that.
At CodeAnt AI, we’ve rethought engineering metrics from the code up.
- Our Quality & Security Platform scans every branch and every commit, catching risks across SAST, IaC, secrets, SCA and more.
- Our AI Code Reviews check every PR for bugs and issues in quality, security, and compliance.
Our new Productivity Platform connects the dots:
✅ From IDE → PR → CI/CD
✅ So you understand the true impact of every single developer / agent
This is the future of engineering metrics.
[STAY TUNED] → We’re launching the Risk Graph next: a live map of every developer and AI agent, showing exactly where risks are born.
Josh Twist
MCP: Beware Big-Identity
It’s no secret that we’re big fans of API keys at Zuplo; we think they’re especially good for B2B scenarios where the API consumer is an organization or entity, not a specific individual. The best API companies use this approach for a variety of reasons - folks like Stripe, Twilio, GitHub and a few others we’ll get into later (for B2B and automation).
Recently, we launched our Instant-MCP-Server functionality that makes adding an MCP server to your API a matter of clicks in Zuplo. As part of that work our team has been leaning hard into everything MCP, to understand the future direction and how we think it will evolve.
One of the things we’ve observed, which is a little disappointing, is how hard some of the community participants are driving to make OAuth and complex authentication flows a “MUST”-have part of the MCP specification. We’ll refer to these actors as “big identity” because it's typically one of the giant IdP companies, who shall remain nameless in this post.
The big identity folks might even argue that OAuth and API keys are compatible because you can have an OAuth dance exchange an opaque OAuth token, yada yada yada. But nobody wants or needs that complexity in this scenario.
The incentives here are plain for all to see, and we hope that common sense prevails; otherwise MCP risks losing the thread and the connection with what folks are doing on the ground. Of course, we think it’s important that OAuth is part of the MCP specification as an option; we love OAuth at Zuplo and have native policies for most of the big-identity players, but we also love simplicity and incredible developer experiences — which is why we built the best damn API key support in the industry.
The good news is that the major AI players are unlikely to be swayed by this lobbying for a few reasons:
* OpenAI’s excellent LLM playground with MCP support natively supports API keys 👏
* Almost every LLM API uses… wait for it… API key authentication 😛
So, what do you think: is big identity driving for complexity purely with its own interests at heart, or do they just care deeply about doing the right thing and keeping MCP auth pure, clean and standardized? Should we be driving to make API keys an OpenID standard?
Anyway, some other good news is that we made API keys (and OAuth) work seamlessly in Zuplo’s MCP support, so go try it and give us feedback. And if you want to implement API key authentication on your API like the greats (including Anthropic, OpenAI and friends), look no further than Zuplo’s solution, which has all the dressings like GitHub secret scanning.
PS - "Big Identity" is not to be confused with my friend Dimitri Sirota's BigID, who are great, because he's an investor.
Pallavi Ahuja
5,000 companies have gone through Y Combinator since 2005. Deel is in the top 10, beating 99% of YC companies. Their $17.3B valuation is staggering.
The crazy part? Most people still don't know what they actually do. They're not just a "payroll app." They're a global infrastructure company.
While competitors built apps that plug into a patchwork of old, third-party systems, Deel did the "impossible": they built the entire system themselves. They spent 6 years building their own network of 250+ legal entities, country by country.
This "boring" work is their real moat. It's an uncopyable, global foundation. And now, they're using that foundation to solve problems at the source.
They saw the biggest point of friction for global teams: the financial stress caused by waiting for money you've already earned. So they just deployed their private network to launch 𝐀𝐧𝐲𝐭𝐢𝐦𝐞 𝐏𝐚𝐲. Because they are the payroll engine, they can simply... end the wait. It’s a feature that unlocks your earned pay, instantly.
The lesson from YC's top 10: Don't just build an app. Build the deep, "boring" infrastructure that gives you an unfair advantage.
#deelpartner #tech #hr
Timon Zimmermann
"Unlimited" sounds great, until it isn't.In the last few days, Cursor learned this the hard way.The receipts are brutal. Developers who paid $7k for yearly subscriptions watched their teams burn through 500 "unlimited" requests in a single day. Others ran out of requests in 3 days instead of the usual 20-25 days.The timeline tells the story:> Cursor quietly changes their Pro plan from 500 guaranteed requests to "unlimited with rate limits".> Developers start hitting walls after 3 days of usage that normally lasted 20-25 days.> Within hours, Reddit and X explode.But here's what makes this fascinating: it's not about the money.Developers aren't leaving because they can't afford it. They're leaving because the rug got pulled. And in that moment of betrayal, they realized something crucial.The moat is gone. Cursor had many devs on autopilot - they were the first AI editor that actually worked, so they never bothered shopping around. But nothing makes you check out the competition faster than feeling cheated. GitHub Copilot offers the same features. Claude Code just dropped. And even the open-source alternatives - Roo Code, Cline, etc. - are right there, no subscription required.The switching cost? Five minutes to install a new extension. Ten if you're picky about your settings.This isn't just about trust. It's about habit. Developers are creatures of workflow. They'll tolerate bugs, they'll work around limitations, they'll even defend their tools to others - until you break the core promise. Cross that line, and watch how fast your biggest evangelists become your worst critics.Cursor's CEO issued the mea culpa within hours. Full refunds, apologetic blog post. They "missed the mark" with the updated pricing and are "fully refunding affected users".Too late. The habit is broken.In the world of AI coding tools, you're not selling software. You're selling trust, you're selling the thing developers open without thinking twice.Because once developers start typing "Cursor vs..." into Google, you've already lost.
170 CommentsKelly Goetsch
Larger companies fetishize operating metrics - SQLs, MQLs, MAUs, CAC, etc. However, there’s a very fine line between 1) reporting as a byproduct of the work you do, and 2) becoming hyper-fixated on hitting specific metrics so that that becomes the only work you do.
Companies use operating metrics to measure individual and team performance - they determine your bonuses, promotions, and the respect your peers give you (yes, everyone always compares). These metrics are often crude, incomplete proxies for actual work. Unfortunately, once those metrics are set in comp plans and codified in a company’s culture, hitting them becomes the work, and everything else - building up goodwill with customers, investing in employee career development, burning down tech debt, painful short-term product features with positive long-term ROI, doing good in the community, etc. - often gets ignored because it can’t be reported on a spreadsheet.
It's like subsisting on vitamins and Soylent: yes, you technically hit your macro- and micronutrient targets, but it's an approximation of food, not actual food.
Operating metrics are great. It's my job to define and enforce them. But they have to be done correctly to incentivize the right behavior and company culture, or else they'll quickly hollow out an organization.
Dr. Else van der Berg
In my latest Substack article, I break down the real metrics tree I'm using at @Outrig.run (a devtool for Go developers).
This metrics tree goes from:
👉 (level 1) #1 business metric
👉 (level 2) #1 product metric (NSM in our case)
👉 (level 3) primary growth levers
👉 (level 4) acquisition channels
👉 (level 4) retention drivers (breakdown of activation moments)
𝗧𝗵𝗲 𝗰𝗵𝗮𝗹𝗹𝗲𝗻𝗴𝗲: Outrig has 4 core features, and users are finding value in different ways. For some features, "value" comes from actively triggering an action; for other features, simply spending time on a page could mean value (e.g. looking at a dashboard). Last but not least, we're operating on limited quantitative data (we're pre-PMF).
I also share my workshop process for building trees with leadership teams (spoiler: start solo, then combine) and how I use AI as a sparring partner.
The article includes screenshots of our actual tree + a step-by-step breakdown. Link in comments 👇
Henry Hund
Here's how to help your SRE team figure out root cause in 5 seconds 👇
Maybe that claim sounds outlandish to you, but here's what I mean... On-call engineers are spending too much valuable time:
😪 Manually sifting through logs and metrics
🤔 Trying to figure out who to ask about which systems (and how they relate to one another)
🥴 Escalating to more tenured engineers when they can't figure out the issue
Every second of downtime is costing companies millions of dollars, so it's critical that outages and incidents are resolved as quickly as humanly possible. And luckily there's now a way to augment the HUMAN aspect of that equation:
🔎 Let AI find the services and dependencies in your infrastructure
📊 Use AI to analyze the relevant observability data it finds
🚀 Look over the important metrics - along with a summary of what's happening - just 5 seconds after you receive the alert
We've got a lot more info on our website, which I'll link to in the comments 📌 Shoot me a note if you want to try it out!