<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
    <channel>
        <title>Docker - Tag - IT Guy Journals</title>
        <link>https://www.itguyjournals.com/tags/docker/</link>
        <description>Docker - Tag - IT Guy Journals</description>
        <generator>Hugo -- gohugo.io</generator><language>en</language><managingEditor>luka.krapic@gmail.com (Luka Krapić)</managingEditor>
            <webMaster>luka.krapic@gmail.com (Luka Krapić)</webMaster><lastBuildDate>Fri, 02 May 2025 20:59:24 &#43;0100</lastBuildDate><atom:link href="https://www.itguyjournals.com/tags/docker/" rel="self" type="application/rss+xml" /><item>
    <title>Building a Portable FastAPI Backend for AWS Lambda and ECS Using Terraform</title>
    <link>https://www.itguyjournals.com/building-fastapi-backend-for-aws-lambda-and-ecs-using-terraform/</link>
    <pubDate>Fri, 02 May 2025 20:59:24 &#43;0100</pubDate>
    <author>Luka Krapić</author>
    <guid>https://www.itguyjournals.com/building-fastapi-backend-for-aws-lambda-and-ecs-using-terraform/</guid>
    <description><![CDATA[<p>In the <a href="../building-backend-apis-with-fastapi-on-aws-lambda" rel="">previous post</a>, we explored how to deploy a FastAPI application on AWS Lambda using an ASGI adapter. This is a great option for early-stage projects: it requires zero infrastructure management, supports rapid iteration, and scales automatically.</p>
<p>But as your application matures, Lambda’s trade-offs can become limiting:</p>
<ul>
<li><strong>Cost scaling</strong> with consistent traffic</li>
<li><strong>Compute/memory coupling</strong> and lack of vertical scaling</li>
<li><strong>Package size limits</strong> and cold starts</li>
</ul>
<p>That’s why many teams adopt a container-based workflow that can run on both <strong>Lambda (via container images)</strong> and <strong>ECS Fargate</strong>. With a little planning, you can build once and deploy to either platform with minimal friction.</p>]]></description>
</item>
<item>
    <title>Building An AI Playground With Ollama And Open WebUI: A Hands-On Introduction For Beginners</title>
    <link>https://www.itguyjournals.com/building-an-ai-playground-with-ollama-and-open-webui/</link>
    <pubDate>Sat, 11 May 2024 16:56:47 &#43;0100</pubDate>
    <author>Luka Krapić</author>
    <guid>https://www.itguyjournals.com/building-an-ai-playground-with-ollama-and-open-webui/</guid>
    <description><![CDATA[<p>Large Language Models (LLMs) have been making waves in the field of artificial intelligence (AI) for quite some time, and their popularity continues to grow. These advanced models have the remarkable ability to understand, generate, and respond to human language with impressive accuracy and depth. Alongside this surge in interest has come a rise in open-source solutions that enable individuals and organizations to host LLMs locally.</p>
<p>In this blog post, we will explore how to turn your existing local computer or server into a simple AI server.</p>]]></description>
</item>
</channel>
</rss>
