<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Azhan's Blog]]></title><description><![CDATA[Azhan's Blog]]></description><link>https://www.azhanali.com</link><generator>RSS for Node</generator><lastBuildDate>Thu, 09 Apr 2026 08:41:31 GMT</lastBuildDate><atom:link href="https://www.azhanali.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[A Beginner's Guide to Kubernetes: Exploring the Building Blocks]]></title><description><![CDATA[This article serves as an introduction to Kubernetes (K8s), a powerful open-source platform designed to automate the deployment, scaling, and operation of application containers. Originally devel...]]></description><link>https://www.azhanali.com/a-beginners-guide-to-kubernetes-exploring-the-building-blocks</link><guid isPermaLink="true">https://www.azhanali.com/a-beginners-guide-to-kubernetes-exploring-the-building-blocks</guid><category><![CDATA[k8s]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[Devops]]></category><category><![CDATA[technology]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[Docker]]></category><category><![CDATA[ci-cd]]></category><dc:creator><![CDATA[Azhan Ali]]></dc:creator><pubDate>Mon, 20 May 2024 09:22:55 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1715096701468/cafb32e5-7735-47fb-b5aa-b66a3353652a.gif" length="0" type="image/gif"/><content:encoded><![CDATA[<p>This article serves as an introduction to Kubernetes (K8s), a powerful open-source platform designed to automate the deployment, scaling, and operation of application containers.
Originally developed by Google and now maintained by the Cloud Native Computing Foundation (CNCF), Kubernetes has become the de facto standard for container orchestration. It allows developers to manage containerized applications across diverse environments, providing a highly resilient, scalable system for modern application deployment.</p>
<h2 id="heading-why-do-we-need-kubernetes"><strong>Why Do We Need Kubernetes?</strong></h2>
<p>With the rise of microservices, managing applications at scale has become increasingly complex. Kubernetes addresses this complexity by providing:</p>
<ul>
<li><p><strong>Automation</strong>: Simplifies the deployment, scaling, and operations of application containers.</p>
</li>
<li><p><strong>Scalability</strong>: Easily scale applications up or down based on demand.</p>
</li>
<li><p><strong>Resilience</strong>: Automatically handles failures, ensuring high availability.</p>
</li>
<li><p><strong>Portability</strong>: Runs on various environments including on-premises, cloud, and hybrid setups.</p>
</li>
</ul>
<h2 id="heading-what-are-the-fundamentals-components-of-kubernetes"><strong>What Are the Fundamental Components of Kubernetes?</strong></h2>
<h3 id="heading-pods">Pods:</h3>
<blockquote>
<p><em>Pods</em> are the smallest deployable units of computing that you can create and manage in Kubernetes.</p>
</blockquote>
<p>Pods are fundamental to Kubernetes, providing a higher level of abstraction over containers and enabling them to be managed more efficiently in a clustered environment. Pods are ephemeral; they can be created, destroyed, and replaced dynamically as needed by the application. Because each replacement Pod receives a new IP address, Pods cannot be reliably reached at a fixed address, which raises the question: how should we communicate with them?</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716188568888/ff95fd7d-e9c5-44b4-9ec2-93d59d9cac0e.png" alt class="image--center mx-auto" /></p>
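<p>As a quick illustration, here is a minimal Pod manifest; the name, label, and image below are placeholders rather than part of any particular application:</p>
<pre><code class="lang-yaml">apiVersion: v1
kind: Pod
metadata:
  name: my-app-pod        # hypothetical name
  labels:
    app: my-app           # label used later by Services to select this Pod
spec:
  containers:
    - name: my-app
      image: nginx:1.25   # any container image
      ports:
        - containerPort: 80
</code></pre>
<p>Applying this with <code>kubectl apply -f pod.yaml</code> creates the Pod, though in practice Pods are rarely created directly; they are usually managed by higher-level objects such as Deployments.</p>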
<h3 id="heading-service">Service:</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716190464236/f10e97d6-7ac8-43d3-a93c-3e295cc614f5.png" alt class="image--center mx-auto" /></p>
<p>A Kubernetes Service is an abstraction that defines a logical set of Pods and a policy by which to access them. Services enable communication between different components of an application without requiring clients to track the dynamic changes in Pod IP addresses. This is crucial for maintaining a stable interface for applications to communicate within and outside the cluster.</p>
<h3 id="heading-key-characteristics-of-a-kubernetes-service"><strong>Key Characteristics of a Kubernetes Service:</strong></h3>
<ol>
<li><p><strong>Permanent IP Address</strong>:</p>
<ul>
<li><p>A Service provides a stable IP address that remains constant regardless of changes in the underlying Pods.</p>
</li>
<li><p>This IP address is often referred to as the "ClusterIP."</p>
</li>
</ul>
</li>
<li><p><strong>Decoupled Lifecycle</strong>:</p>
<ul>
<li><p>The lifecycle of a Service is independent of the Pods it routes traffic to.</p>
</li>
<li><p>If a Pod dies and is replaced by a new one, the Service’s IP remains unchanged, ensuring consistent access.</p>
</li>
</ul>
</li>
</ol>
<h3 id="heading-types-of-kubernetes-services">Types of Kubernetes Services:</h3>
<p><strong>ClusterIP (Internal Service)</strong>:</p>
<ul>
<li><p><strong>Definition</strong>: The default type of Service, providing an internal IP address accessible only within the cluster.</p>
</li>
<li><p><strong>Use Case</strong>: Ideal for internal communication between different microservices, such as a backend service communicating with a database.</p>
</li>
</ul>
<p><strong>NodePort (External Service)</strong>:</p>
<ul>
<li><p><strong>Definition</strong>: Exposes the Service on each Node’s IP at a static port (the NodePort). This allows external traffic to access the Service.</p>
</li>
<li><p><strong>Use Case</strong>: Useful for exposing applications to the outside world for direct access.</p>
</li>
</ul>
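<p>As a sketch (reusing the hypothetical <code>app: my-app</code> label), a NodePort Service could look like this; dropping the <code>type</code> and <code>nodePort</code> fields yields the default, internal ClusterIP Service:</p>
<pre><code class="lang-yaml">apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  type: NodePort            # external Service; omit for the default ClusterIP
  selector:
    app: my-app             # routes traffic to Pods carrying this label
  ports:
    - port: 80              # the Service's own port inside the cluster
      targetPort: 8080      # the container port on the selected Pods
      nodePort: 30080       # static port opened on every node (30000-32767)
</code></pre>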
<h3 id="heading-ingress">Ingress:</h3>
<p>Kubernetes Ingress is a powerful API object that manages external access to services within a cluster, typically using HTTP and HTTPS. Ingress provides a way to define rules for routing traffic to the appropriate services based on the request's host and path. It helps in presenting a more user-friendly URL structure and handling SSL termination.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716191066983/4267b2c4-bea7-487c-a002-95126df2048e.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-key-characteristics-of-k8s-ingress">Key Characteristics of K8s Ingress</h3>
<ol>
<li><p><strong>User-Friendly URLs</strong>:</p>
<ul>
<li><p><strong>Example</strong>: Instead of accessing your application via an IP address and port like <a target="_blank" href="http://124.91.105.3:8080"><code>http://124.91.105.3:8080</code></a>, you can use a more practical and user-friendly URL like <a target="_blank" href="https://my-app.com"><code>https://my-app.com</code></a>.</p>
</li>
<li><p><strong>Functionality</strong>: Ingress maps these friendly URLs to the appropriate backend services.</p>
</li>
</ul>
</li>
<li><p><strong>Request Routing</strong>:</p>
<ul>
<li><p><strong>Process</strong>: The request first comes to the Ingress, and the Ingress controller forwards it to the relevant service based on defined rules.</p>
</li>
<li><p>Example: Request -&gt; Ingress -&gt; Service</p>
</li>
</ul>
</li>
<li><p><strong>SSL/TLS Termination</strong>:</p>
<ul>
<li>Handles SSL termination, allowing you to use HTTPS without needing each service to manage its own certificates.</li>
</ul>
</li>
</ol>
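<p>A hypothetical Ingress combining these three characteristics might look like the following (the host, Service name, and TLS Secret name are placeholders):</p>
<pre><code class="lang-yaml">apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
spec:
  tls:
    - hosts:
        - my-app.com
      secretName: my-app-tls          # TLS certificate stored as a Secret
  rules:
    - host: my-app.com                # user-friendly URL
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app-service  # traffic is forwarded to this Service
                port:
                  number: 80
</code></pre>
<p>Note that an Ingress resource only declares the rules; an Ingress controller (for example, the NGINX Ingress Controller) must be installed in the cluster to actually enforce them.</p>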
<h3 id="heading-config-maps-and-secrets">Config Maps and Secrets:</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716191740322/fc15951d-cd65-440c-8970-1a19b29af862.png" alt class="image--center mx-auto" /></p>
<p><strong>What is Config Map:</strong></p>
<p>A ConfigMap is a Kubernetes object used to store non-sensitive configuration data in key-value pairs. It allows you to decouple configuration artifacts from container images, making applications more portable and easier to manage. This way, you can update the configuration without needing to rebuild and redeploy your container images.</p>
<h4 id="heading-key-characteristics-of-configmap"><strong>Key Characteristics of ConfigMap:</strong></h4>
<ol>
<li><p><strong>External Configuration</strong>:</p>
<ul>
<li><p>Stores configuration data such as database connection strings, feature flags, or external service URLs.</p>
</li>
<li><p>The configuration can be updated without altering the container image.</p>
</li>
</ul>
</li>
</ol>
<p><strong>What are Secrets:</strong></p>
<p>A Secret is similar to a ConfigMap but is specifically designed to store sensitive information such as passwords, OAuth tokens, and SSH keys. Secrets ensure that sensitive data is handled more securely and is not exposed directly in the Pod definition or source code.</p>
<h4 id="heading-key-characteristics-of-secret"><strong>Key Characteristics of Secret:</strong></h4>
<ol>
<li><p><strong>Sensitive Data Handling</strong>:</p>
<ul>
<li><p>Secrets are base64-encoded by default; since base64 is an encoding rather than encryption, access to Secrets should still be restricted, and encryption at rest should be enabled where available.</p>
</li>
<li><p>Access to Secrets can be tightly controlled using Kubernetes RBAC (Role-Based Access Control).</p>
</li>
</ul>
</li>
</ol>
<p>Example: Storing DB credentials</p>
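<p>As a sketch of that example, the following defines a ConfigMap holding a (hypothetical) database host and a Secret holding base64-encoded credentials, with a commented hint at how a container can reference them:</p>
<pre><code class="lang-yaml">apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DB_HOST: mysql-service          # non-sensitive configuration
---
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  DB_USER: cm9vdA==               # base64 for "root"
  DB_PASSWORD: cGFzc3dvcmQ=       # base64 for "password"
---
# In a Pod or Deployment container spec, inject the values as env vars:
# env:
#   - name: DB_HOST
#     valueFrom:
#       configMapKeyRef:
#         name: app-config
#         key: DB_HOST
#   - name: DB_PASSWORD
#     valueFrom:
#       secretKeyRef:
#         name: db-credentials
#         key: DB_PASSWORD
</code></pre>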
<h3 id="heading-kubernetes-volumes"><strong>Kubernetes Volumes</strong>:</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716192464175/639f2908-654e-4263-a4c2-9be19f6173c5.png" alt class="image--center mx-auto" /></p>
<p>A Kubernetes Volume is a directory that is accessible to containers in a Pod, used to store data persistently across the Pod's lifecycle. Unlike the ephemeral storage provided by containers, which is lost when the container is terminated, Volumes ensure that data remains available even if a Pod dies and is recreated.</p>
<h4 id="heading-key-characteristics-of-kubernetes-volumes"><strong>Key Characteristics of Kubernetes Volumes:</strong></h4>
<ol>
<li><p><strong>Persistence</strong>:</p>
<ul>
<li><p>Data stored in a Volume is preserved across Pod restarts.</p>
</li>
<li><p>Ensures that critical application data, such as database files, are not lost when containers are restarted or moved.</p>
</li>
</ul>
</li>
<li><p><strong>Local vs. Remote Storage</strong>:</p>
<ul>
<li><p><strong>Local Storage</strong>: The storage lives inside the K8s cluster (e.g., a disk attached to one of the cluster's nodes).</p>
</li>
<li><p><strong>Remote Storage</strong>: The storage is provided by a remote service (e.g., AWS EBS, NFS). This can provide greater resilience and scalability as the storage is independent of the node’s lifecycle.</p>
</li>
</ul>
</li>
<li><p><strong>Stateful Applications</strong>:</p>
<ul>
<li><p>Volumes are essential for stateful applications, such as databases, which need to retain data even when Pods are restarted or rescheduled.</p>
</li>
<li><p>Kubernetes itself does not manage database activities like replication or backups. These need to be handled by the database software or external tools.</p>
</li>
</ul>
</li>
</ol>
<p><strong>Example Scenario:</strong></p>
<p>Consider a database Pod that requires persistent storage. Without a Volume, if the Pod dies, all data stored in the container’s filesystem would be lost. By attaching a Volume, the data is stored persistently, ensuring it is retained across Pod restarts.</p>
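<p>This scenario can be sketched with a PersistentVolumeClaim and a Pod that mounts it (the names and storage size here are illustrative):</p>
<pre><code class="lang-yaml">apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes:
    - ReadWriteOnce                   # mountable read-write by a single node
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: mysql-pod
spec:
  containers:
    - name: mysql
      image: mysql:8.0
      volumeMounts:
        - name: data
          mountPath: /var/lib/mysql   # database files land on the Volume
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: db-data            # binds the Pod to the claim above
</code></pre>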
<h3 id="heading-deployment-and-stateful-sets">Deployment and Stateful Sets:</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716193142414/7e0e5228-7984-4016-87ca-cb00df5bdd2a.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-what-is-a-deployment"><strong>What is a Deployment:</strong></h3>
<p>A Deployment in Kubernetes is a higher-level abstraction that manages the desired state of a set of Pods. It provides mechanisms to deploy, update, and scale applications without manual intervention, ensuring high availability and fault tolerance.</p>
<p><strong>Key Characteristics of Deployments:</strong></p>
<ol>
<li><p><strong>Replica Management</strong>: Specify the desired number of Pod replicas. Kubernetes ensures the specified number is running at all times.</p>
</li>
<li><p><strong>Rolling Updates</strong>: Allows updating the application without downtime by gradually replacing old Pods with new ones. This ensures zero downtime for end-users during updates.</p>
</li>
<li><p><strong>Rollback</strong>: If a new deployment causes issues, you can easily roll back to a previous version.</p>
</li>
<li><p><strong>Self-Healing</strong>: If a Pod dies, the Deployment automatically creates a new Pod to maintain the desired number of replicas.</p>
</li>
</ol>
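<p>The characteristics above map directly onto the fields of a Deployment manifest; this is a hypothetical sketch, with placeholder names and image:</p>
<pre><code class="lang-yaml">apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                     # desired number of Pod replicas
  selector:
    matchLabels:
      app: my-app
  strategy:
    type: RollingUpdate           # replace old Pods gradually, avoiding downtime
  template:                       # the Pod template that gets replicated
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:1.0       # hypothetical image
          ports:
            - containerPort: 8080
</code></pre>
<p>Updating the image tag and re-applying the manifest triggers a rolling update; <code>kubectl rollout undo deployment/my-app</code> rolls it back.</p>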
<h3 id="heading-what-is-stateful-sets">What are StatefulSets:</h3>
<p>StatefulSet is a Kubernetes object used to manage stateful applications. Unlike Deployments, StatefulSets provide guarantees about the ordering and uniqueness of Pods, making them ideal for applications that require stable, unique network identifiers or stable storage.</p>
<p><strong>Key Characteristics of StatefulSets:</strong></p>
<ol>
<li><p><strong>Stable, Unique Pod Identities</strong>: Each Pod gets a unique, stable network identity (hostname). Pods are named in a predictable, consistent manner.</p>
</li>
<li><p><strong>Ordered, Graceful Deployment and Scaling</strong>: Pods are created, deleted, and scaled in a specific, defined order, ensuring that dependencies are respected.</p>
</li>
<li><p><strong>Persistent Storage</strong>: Each Pod in a StatefulSet can have its own persistent storage, defined via PersistentVolumeClaims. This ensures data is preserved across Pod restarts.</p>
</li>
<li><p><strong>Pod Management Policy</strong>: Pods can be managed in either OrderedReady (Pods are started sequentially) or Parallel (Pods are started simultaneously) fashion.</p>
</li>
</ol>
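<p>A condensed, hypothetical StatefulSet for a database could look like this; note the <code>serviceName</code> for stable network identities and the per-replica storage claims:</p>
<pre><code class="lang-yaml">apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql              # headless Service providing stable DNS names
  replicas: 2                     # Pods are named mysql-0, mysql-1, ...
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:8.0
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
  volumeClaimTemplates:           # each replica gets its own PersistentVolumeClaim
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
</code></pre>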
<p>In a nutshell, <strong>Deployments</strong> manage stateless application Pods, ensuring high availability by distributing them across nodes and using Services for load balancing. Whereas, <strong>StatefulSets</strong> manage stateful Pods, ensuring each Pod has a stable network identity and persistent storage. They are often used for databases that require consistent and stable storage.</p>
<p>Now that we've explored most of the K8s components, let's dive deep into the K8s architecture.</p>
<h3 id="heading-k8s-architecture">K8s Architecture:</h3>
<p>In Kubernetes, the architecture is divided into two main components: <strong>the Control Plane</strong> and the <strong>Worker Nodes</strong>. The Worker Nodes are the backbone of Kubernetes, responsible for running the actual applications in the form of Pods. Each Worker Node is a machine that performs the necessary operations to run containers.</p>
<h3 id="heading-worker-nodes">Worker Nodes</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716195359120/09076e7f-e72e-4941-965b-98ca749722d1.png" alt class="image--center mx-auto" /></p>
<p>Each worker node runs three processes:</p>
<ol>
<li><p>Container Runtime</p>
</li>
<li><p>Kubelet</p>
</li>
<li><p>Kube Proxy</p>
</li>
</ol>
<h3 id="heading-container-runtime"><strong>1. Container Runtime</strong></h3>
<p>The Container Runtime is responsible for running the containers on each Worker Node. It is the software component that executes containerized applications and manages their lifecycle.</p>
<p><strong>Key Functions:</strong></p>
<ul>
<li><p><strong>Container Execution:</strong> Starts and stops containers based on the instructions from the Kubelet.</p>
</li>
<li><p><strong>Image Management:</strong> Pulls container images from container registries and caches them locally.</p>
</li>
<li><p><strong>Resource Isolation:</strong> Ensures that containers have the required resources (CPU, memory, etc.) and isolates them from each other using namespaces and control groups (cgroups).</p>
</li>
</ul>
<p><strong>Examples of Container Runtimes:</strong></p>
<ul>
<li><p><strong>Docker:</strong> The most commonly used container runtime, known for its wide adoption and rich feature set.</p>
</li>
<li><p><strong>containerd:</strong> A lightweight runtime that provides the core container functionality.</p>
</li>
<li><p><strong>CRI-O:</strong> An Open Container Initiative (OCI) compatible runtime optimized for Kubernetes.</p>
</li>
</ul>
<h4 id="heading-2-kubelet"><strong>2. Kubelet</strong></h4>
<p>Kubelet is an agent that runs on each Worker Node and ensures that containers are running in a Pod as expected. It acts as the bridge between the Kubernetes Control Plane and the Worker Node.</p>
<p><strong>Key Functions:</strong></p>
<ul>
<li><p><strong>Pod Management:</strong> Receives Pod specifications from the Control Plane and ensures that the specified containers are running and healthy.</p>
</li>
<li><p><strong>Node Status:</strong> Continuously reports the status of the node and its workloads back to the Control Plane.</p>
</li>
<li><p><strong>Health Monitoring:</strong> Monitors the health of the containers and takes corrective actions, such as restarting containers if they fail.</p>
</li>
</ul>
<p><strong>Responsibilities:</strong></p>
<ul>
<li><p><strong>Node Registration:</strong> Registers the node with the Kubernetes API server.</p>
</li>
<li><p><strong>Pod Lifecycle Management:</strong> Creates, updates, and destroys Pods as per the instructions from the Control Plane.</p>
</li>
<li><p><strong>Resource Monitoring:</strong> Tracks the resource usage of Pods and containers on the node.</p>
</li>
</ul>
<h4 id="heading-3-kube-proxy"><strong>3. Kube Proxy</strong></h4>
<p>Kube Proxy is a network proxy that runs on each Worker Node and maintains network rules for communication within the Kubernetes cluster.</p>
<p><strong>Key Functions:</strong></p>
<ul>
<li><p><strong>Network Routing:</strong> Forwards requests to the appropriate Pods across nodes in the cluster.</p>
</li>
<li><p><strong>Service Discovery:</strong> Enables Pods to find and communicate with each other using Kubernetes Services.</p>
</li>
<li><p><strong>Load Balancing:</strong> Distributes traffic among the Pods in a Service to ensure even workload distribution.</p>
</li>
</ul>
<p><strong>Responsibilities:</strong></p>
<ul>
<li><p><strong>IP Tables Management:</strong> Manages IP tables rules to ensure traffic is properly routed to the correct Pods.</p>
</li>
<li><p><strong>Service VIP Management:</strong> Handles the virtual IP addresses assigned to Services, making it possible for clients to access them without knowing the specifics of Pod IP addresses.</p>
</li>
</ul>
<p>OK, so now we've covered the worker nodes in the K8s architecture, but some questions remain unanswered. Let's analyze them one by one:</p>
<ol>
<li><p>How would one interact with this cluster?</p>
</li>
<li><p>On which node should a new pod be scheduled?</p>
</li>
<li><p>If a replica pod dies, who monitors it and reschedules it?</p>
</li>
</ol>
<p>All of these tasks are managed by the master nodes (the control plane). Let's analyze the control plane in depth.</p>
<h3 id="heading-master-nodes">Master Nodes:</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716196870025/327c5a36-3473-4e47-8be6-8484ba1f57fb.png" alt class="image--center mx-auto" /></p>
<p>Worker nodes handle the grunt work of running containerized applications within Kubernetes. However, the brains of the operation reside in the master nodes, which collectively form the Kubernetes control plane. The control plane is responsible for managing the entire cluster, ensuring worker nodes are utilized efficiently and applications run smoothly.</p>
<p>There are four key processes that work together in the control plane:</p>
<ol>
<li><p><strong>API Server:</strong> This is the front-end for the control plane. It acts as the single point of entry, accepting requests from tools like <code>kubectl</code> (the Kubernetes command-line tool) or programmatic interactions from applications. The API server validates these requests against Kubernetes API definitions and then interacts with other control plane components to fulfill them.</p>
</li>
<li><p><strong>Scheduler :</strong> As the name suggests, the scheduler is responsible for placing new or rescheduled pods onto worker nodes. The API server sends pod information to the scheduler, which considers factors like resource availability, node health, and existing deployments to determine the optimal placement for each pod.</p>
</li>
<li><p><strong>Controller Manager:</strong> This is the workhorse of the control plane, running multiple controllers in the background. Each controller is responsible for maintaining the desired state of a specific Kubernetes resource (e.g., pods, deployments, services). The controller manager constantly monitors the cluster state through the API server and takes corrective actions if any resource deviates from its desired state. For instance, if a pod crashes unexpectedly, the replication controller creates a replacement, which the scheduler then places on a node, maintaining the desired number of running pods.</p>
</li>
<li><p><strong>etcd:</strong> Unlike the other three components, etcd is a separate process that acts as the distributed key-value store for Kubernetes. It stores all the cluster state information, including pod definitions, node statuses, and configuration data. The API server, scheduler, and controller manager all rely on etcd to access and update this critical information.</p>
</li>
</ol>
<p>Another important consideration is what happens if a master node (control plane) crashes. To keep the cluster running as intended, the control plane is generally replicated as well, typically by running multiple master nodes.</p>
<p>Thanks a lot for reading! By understanding these fundamental components and this architecture, you'll have a solid foundation for exploring Kubernetes and its capabilities in managing containerized applications.</p>
]]></content:encoded></item><item><title><![CDATA[Sailing Smoothly with Docker: How to dockerize a Spring Boot Application.]]></title><description><![CDATA[In the previous article, we explored the magic of Docker and its potential to revolutionize application deployment. Now, let's put this knowledge into practice by containerizing a Spring Boot application that interacts with a MySQL database. Buckle u...]]></description><link>https://www.azhanali.com/sailing-smoothly-with-docker-how-to-dockerize-a-spring-boot-application</link><guid isPermaLink="true">https://www.azhanali.com/sailing-smoothly-with-docker-how-to-dockerize-a-spring-boot-application</guid><category><![CDATA[Docker]]></category><category><![CDATA[Docker compose]]></category><category><![CDATA[docker images]]></category><category><![CDATA[Dockerfile]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Springboot]]></category><category><![CDATA[Devops articles]]></category><dc:creator><![CDATA[Azhan Ali]]></dc:creator><pubDate>Mon, 22 Apr 2024 08:35:27 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1713759486736/ab02dfaa-5f39-40fe-a7af-4b3c23eda6cc.gif" length="0" type="image/gif"/><content:encoded><![CDATA[<p>In the previous article, we explored the magic of Docker and its potential to revolutionize application deployment. Now, let's put this knowledge into practice by containerizing a Spring Boot application that interacts with a MySQL database. Buckle up, as we leverage Docker and Docker Compose to create a portable, scalable, and efficient development environment!</p>
<p>In this article, we use Spring Boot and MySQL, but the same technique can be applied to dockerize apps built on other tech stacks.</p>
<h3 id="heading-understanding-the-spring-boots-application-structure">Understanding the Spring Boot Application's Structure:</h3>
<p>Below is the link to the Spring Boot application that we are going to containerize in this article:</p>
<p><a target="_blank" href="https://github.com/Azhan777/springboot-backend">Spring Boot application on GitHub</a></p>
<p>Our Spring Boot application is broadly divided into three layers:</p>
<ol>
<li><p>Controller: Spring Boot controllers act as the entry point for HTTP requests, handling routing, data binding, orchestration of business logic, response preparation, and interaction with clients.</p>
</li>
<li><p>Service: performs the business logic.</p>
</li>
<li><p>Repository: acts as the abstraction layer for interacting with the database.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713072656150/a1a71a6d-1030-4a6a-b833-fc6eb6935a7d.png" alt class="image--center mx-auto" /></p>
<p>The aforementioned Spring Boot application exposes the following APIs:</p>
<ol>
<li><p>saveEmployee</p>
</li>
<li><p>getAllEmployees</p>
</li>
<li><p>getEmployeeById</p>
</li>
<li><p>updateEmployee</p>
</li>
<li><p>deleteEmployee</p>
</li>
</ol>
<pre><code class="lang-bash"><span class="hljs-comment">#curl command to save the employee to the database </span>
curl -X POST \
  http://localhost:8080/api/employees \
  -H <span class="hljs-string">'Content-Type: application/json'</span> \
  -d <span class="hljs-string">'{
    "firstName": "John",
    "lastName": "Doe",
    "email": "john.doe@example.com"
  }'</span>
</code></pre>
<p>An employee record with the given details will be saved to the database.</p>
<pre><code class="lang-bash"><span class="hljs-comment">#curl command to get all employees</span>
curl -X GET \
  http://localhost:8080/api/employees \
  -H <span class="hljs-string">'Content-Type: application/json'</span>
</code></pre>
<p>You will get all the employees that are persisted in the employees table.</p>
<h3 id="heading-lets-containerize">Let's Containerize:</h3>
<p>To containerize our application, we'll create a Dockerfile. This acts as a blueprint, defining the environment and configuration needed to run the application within a Docker image. Let's start building our Dockerfile.</p>
<pre><code class="lang-dockerfile"><span class="hljs-comment">#FROM: Specifies the base image to build upon. In this case, it's maven:3.8.4-openjdk-17, which is an image containing Maven and OpenJDK 17.</span>
<span class="hljs-comment">#AS builder: Assigns a name (builder) to this stage. This allows for multi-stage builds, where you can have different stages for building and running your application.</span>
<span class="hljs-keyword">FROM</span> maven:<span class="hljs-number">3.8</span>.<span class="hljs-number">4</span>-openjdk-<span class="hljs-number">17</span> AS builder 

<span class="hljs-comment">#WORKDIR: Sets the working directory inside the container to /app. This is where subsequent commands will be executed.</span>
<span class="hljs-keyword">WORKDIR</span><span class="bash"> /app</span>

<span class="hljs-comment">#Copies files from the Docker build context (the directory containing the Dockerfile) into the container's filesystem. The first . represents the source directory in the build context (current directory), and the second . represents the destination directory in the container (current working directory, which is /app).</span>
<span class="hljs-keyword">COPY</span><span class="bash"> . .</span>

<span class="hljs-comment">#RUN: Executes a command inside the container during the image build process. Here, it runs Maven to build the application. The options -B (batch mode) and -DskipTests (skips running tests) are Maven options. The clean package command cleans any previous build artifacts and packages the application into a JAR file.</span>
<span class="hljs-keyword">RUN</span><span class="bash"> mvn -B -DskipTests clean package</span>

<span class="hljs-comment">#Starts a new stage in the Dockerfile, using another base image (openjdk:17), which contains only the Java runtime environment (JRE) without Maven.</span>
<span class="hljs-keyword">FROM</span> openjdk:<span class="hljs-number">17</span>

<span class="hljs-comment">#Creates a mount point at /tmp in the container. Volumes are used to persist data outside the container's lifecycle. This can be useful for storing logs, configuration files, or any other data that needs to be preserved even if the container is deleted.</span>
<span class="hljs-keyword">VOLUME</span><span class="bash"> /tmp</span>

<span class="hljs-comment">#Informs Docker that the container will listen on port 8080 at runtime. However, this does not actually publish the port; it is merely a documentation for users of the image.</span>
<span class="hljs-keyword">EXPOSE</span> <span class="hljs-number">8080</span>

<span class="hljs-comment">#Sets the working directory inside the container to /app again, just like in the previous stage.</span>
<span class="hljs-keyword">WORKDIR</span><span class="bash"> /app</span>

<span class="hljs-comment">#Copies the JAR file built in the previous stage (builder) into the current stage. --from=builder specifies the stage to copy from, and /app/target/*.jar is the source path of the JAR file. app.jar is the destination path inside the current stage.</span>
<span class="hljs-keyword">COPY</span><span class="bash"> --from=builder /app/target/*.jar app.jar</span>

<span class="hljs-comment">#Specifies the command that will be executed when the container starts. Here, it runs the Java application by executing the JAR file (app.jar) using the java -jar command.</span>
<span class="hljs-keyword">ENTRYPOINT</span><span class="bash"> [<span class="hljs-string">"java"</span>,<span class="hljs-string">"-jar"</span>,<span class="hljs-string">"app.jar"</span>]</span>
</code></pre>
<p>With our Dockerfile ready, let's build the image for our application.</p>
<pre><code class="lang-bash">docker build -t &lt;Image_Name&gt; .
</code></pre>
<p>Running our application directly from the image wouldn't be ideal, as it depends on MySQL. To manage these interconnected services, we'll leverage Docker Compose.</p>
<p>Docker Compose is a tool that simplifies running multi-container applications. It orchestrates the creation and management of linked containers, ensuring they share a network and can communicate seamlessly. Let's create a docker-compose.yml file to define this configuration.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">services:</span>
  <span class="hljs-attr">api_service:</span>
    <span class="hljs-attr">restart:</span> <span class="hljs-string">always</span>
    <span class="hljs-comment"># Specifies that the api_service depends on the mysqldb service. This ensures that Docker Compose will start the mysqldb service before starting the api_service.</span>
    <span class="hljs-attr">depends_on:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">mysqldb</span>
    <span class="hljs-comment"># Indicates that the Dockerfile for building the api_service image is located in the current directory (.). This means Docker Compose will build the image using the Dockerfile in the current directory.</span>
    <span class="hljs-attr">build:</span> <span class="hljs-string">.</span>
    <span class="hljs-attr">ports:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"8080:8080"</span>
    <span class="hljs-comment"># Environment Variables which are needed by our Spring Boot Application</span>
    <span class="hljs-attr">environment:</span>
      <span class="hljs-attr">spring.datasource.url:</span> <span class="hljs-string">'jdbc:mysql://mysqldb:3306/ems?allowPublicKeyRetrieval=true&amp;useSSL=false'</span>
      <span class="hljs-attr">spring.datasource.username:</span> <span class="hljs-string">root</span>
      <span class="hljs-attr">spring.datasource.password:</span> <span class="hljs-string">root</span>
  <span class="hljs-attr">mysqldb:</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">"mysql:latest"</span>
    <span class="hljs-attr">ports:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-number">3306</span><span class="hljs-string">:3306</span>
    <span class="hljs-attr">environment:</span>
      <span class="hljs-attr">MYSQL_DATABASE:</span> <span class="hljs-string">ems</span>
      <span class="hljs-attr">MYSQL_ROOT_PASSWORD:</span> <span class="hljs-string">root</span>
    <span class="hljs-attr">volumes:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">mysql-data:/var/lib/mysql</span>
<span class="hljs-attr">volumes:</span>
  <span class="hljs-attr">mysql-data:</span>
</code></pre>
<p>You can bring your multi-container application to life by running the <code>docker-compose up</code> command. This will create and start the services defined in your docker-compose.yml file.</p>
<pre><code class="lang-bash">docker-compose up
</code></pre>
<p>With Docker Compose managing everything, running <code>docker-compose up</code> will start both the Spring Boot and MySQL containers, establishing the necessary network connection for communication.<br />Now, let's test our application functionality by attempting to save an employee record.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713758703980/32e4d9cb-7737-4a91-b571-a8ad633c15e8.png" alt class="image--center mx-auto" /></p>
<p>Now, let's try to fetch all Employees</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1713758783738/9f49f00d-e8a9-4b01-ad44-a081d447cb75.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-whats-next">What's Next:</h3>
<p>Our exploration of Docker's layer-based image creation highlights how each layer contributes to the final image size and to build time. Docker caches layers, so to speed up rebuilds, place the instructions that change least often (such as installing dependencies) near the top of your Dockerfile and the frequently changing steps (such as the <code>RUN mvn ...</code> command that compiles your application code) towards the end. That way, a code change invalidates only the last few layers instead of forcing a full rebuild. Going further, a multi-stage build lets you compile the application in one stage and copy only the finished artifact into a slim runtime stage, keeping build tools out of the final image entirely.</p>
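<p>To make this concrete, here is a sketch of a multi-stage Dockerfile for a Spring Boot application. The image tags and paths are illustrative assumptions, not taken from the project above:</p>

```dockerfile
# Stage 1: build. These layers change often but are discarded afterwards.
FROM maven:3.9-eclipse-temurin-17 AS build
WORKDIR /app
# Copy the rarely-changing dependency descriptor first so the downloaded
# dependencies stay cached across source-code changes
COPY pom.xml .
RUN mvn dependency:go-offline
# Copy the frequently-changing sources last
COPY src ./src
RUN mvn package -DskipTests

# Stage 2: runtime. Only the built jar is carried over.
FROM eclipse-temurin:17-jre
WORKDIR /app
COPY --from=build /app/target/*.jar app.jar
ENTRYPOINT ["java", "-jar", "app.jar"]
```

<p>Only the final stage ends up in the shipped image, so the Maven toolchain and intermediate build layers never inflate its size.</p>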
]]></content:encoded></item><item><title><![CDATA[Sailing Smoothly with Docker: Unleashing the Potential of Containerization]]></title><description><![CDATA[Imagine deploying your application with a simple command, anywhere in the world, on any server, with all its dependencies perfectly in place. No more configuration headaches, no more compatibility issues. This is the magic of Docker, a groundbreaking...]]></description><link>https://www.azhanali.com/sailing-smoothly-with-docker-unleashing-the-potential-of-containerization</link><guid isPermaLink="true">https://www.azhanali.com/sailing-smoothly-with-docker-unleashing-the-potential-of-containerization</guid><category><![CDATA[Docker]]></category><category><![CDATA[Dockerfile]]></category><category><![CDATA[docker images]]></category><category><![CDATA[docker-network]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[software development]]></category><dc:creator><![CDATA[Azhan Ali]]></dc:creator><pubDate>Wed, 21 Feb 2024 19:37:55 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1708286496186/7942d7d5-ca0c-4c09-84d6-1e3cc0a23a95.gif" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Imagine deploying your application with a simple command, anywhere in the world, on any server, with all its dependencies perfectly in place. No more configuration headaches, no more compatibility issues. This is the magic of Docker, a groundbreaking containerization technology that's transforming the way we build, ship, and run applications.</p>
<p>Before learning what Docker is, it's important to understand what it replaced.</p>
<h3 id="heading-before-docker-the-virtual-machine-era"><strong>Before Docker: The Virtual Machine Era</strong></h3>
<p>Imagine juggling multiple software projects, each requiring a specific operating system. Traditionally, this meant dedicating separate physical servers for each system, leading to resource underutilization and complex management. Thankfully, <strong>virtual machines (VMs)</strong> emerged as a game-changer.</p>
<p>VMs act as software-based computers within your physical server, allowing you to run multiple operating systems <strong>simultaneously</strong> on a single machine. This not only saves hardware costs but also improves resource utilization and flexibility. Each VM operates in its own isolated environment, ensuring software conflicts and security vulnerabilities don't spread. Think of it as dividing your physical server into individual virtual apartments, each with its own operating system and resources, managed by a central "building manager" called the <strong>hypervisor</strong>.</p>
<p>Sounds confusing? Let's understand it through an analogy.</p>
<p>Imagine a physical apartment building. The entire building represents <strong>physical hardware</strong>. Each individual apartment is a <strong>virtual machine (VM)</strong>. The building manager, who assigns apartments and ensures everything runs smoothly, is the <strong>hypervisor</strong>. And the concept of dividing the building into separate, self-contained spaces is <strong>virtualization</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708285361228/5a7e3369-0681-4597-8f3e-0be10652b6fb.jpeg" alt class="image--center mx-auto" /></p>
<p>While VMs revolutionized software development, their resource overhead and complexity paved the way for lighter-weight and more efficient solutions like Docker, which we'll explore next...</p>
<h3 id="heading-docker-containers-take-center-stage"><strong>Docker: Containers Take Center Stage</strong></h3>
<p>Having explored the world of virtual machines, let's shift our focus to <strong>Docker</strong>, a revolutionary technology that takes application isolation and portability to a whole new level. While VMs virtualize the entire computer environment, including hardware and operating system, Docker adopts a different approach. It leverages the host machine's operating system kernel and creates <strong>isolated environments</strong> specifically for applications, often referred to as <strong>containers</strong>.</p>
<p>Imagine VMs as separate apartments within a building, each with its own complete set of utilities and resources. Docker containers, on the other hand, are more like individual rooms within a single apartment, sharing the underlying infrastructure while maintaining privacy and isolation for each application.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708288472902/c2dd56ec-f2b9-4d31-a776-17ac9ecef9b5.png" alt class="image--center mx-auto" /></p>
<p><strong>Visualizing the Difference:</strong></p>
<p>Let's analyze the aforementioned diagram:</p>
<ul>
<li><p><strong>Host Operating System:</strong> This forms the foundation, representing the physical hardware and the underlying operating system of your machine.</p>
</li>
<li><p><strong>Docker Engine:</strong> This software layer sits on top of the host OS and manages the creation and execution of Docker containers.</p>
</li>
<li><p><strong>Containers:</strong> These are the isolated environments created by Docker, each containing an application along with its necessary libraries and dependencies. They share the host OS kernel but have their own user space, ensuring isolation and security.</p>
</li>
</ul>
<p><strong>Key Advantages of Docker:</strong></p>
<ul>
<li><p><strong>Lightweight:</strong> Unlike VMs, containers are much smaller and faster to start, making them ideal for modern microservices architectures and rapid application development.</p>
</li>
<li><p><strong>Portable:</strong> Docker containers package all the dependencies an application needs, making them easily transferable across different environments without configuration changes.</p>
</li>
<li><p><strong>Resource-efficient:</strong> Sharing the host OS kernel allows containers to utilize resources more efficiently compared to VMs, leading to better performance and cost savings.</p>
</li>
<li><p><strong>Scalable:</strong> You can easily scale your applications by adding or removing containers based on demand, providing flexibility and agility.</p>
</li>
</ul>
<h2 id="heading-important-docker-concepts">Important Docker Concepts:</h2>
<p>Before getting our hands dirty, let's learn some foundational concepts.</p>
<p><strong>Dockerfile:</strong> A Dockerfile serves as the blueprint for building a Docker image. It specifies the base operating system, dependencies, software installation steps, and configuration settings needed to create a container. Think of it as a detailed recipe for creating consistent and reproducible containers.</p>
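<p>For illustration, a minimal Dockerfile for a pre-built Java application might look like this; the base image tag and jar path are assumptions for the sketch:</p>

```dockerfile
# Base image: provides the Java runtime
FROM eclipse-temurin:17-jre
# Working directory inside the container
WORKDIR /app
# Copy the application artifact built on the host
COPY target/app.jar app.jar
# Document the port the application listens on
EXPOSE 8080
# Command executed when the container starts
ENTRYPOINT ["java", "-jar", "app.jar"]
```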
<p><strong>Docker Networks:</strong> Containers by default operate in isolation. Docker networks enable you to define and connect containers together, allowing them to communicate and interact as needed. This is crucial for building multi-container applications where components need to collaborate.</p>
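<p>As a quick sketch of the idea (these commands assume a running Docker daemon, and the image and container names are placeholders):</p>

```bash
# Create a user-defined bridge network; containers attached to it
# can reach each other by container name
docker network create my-app-net

# Start two containers on that network
docker run -d --name mysqldb --network my-app-net mysql:latest
docker run -d --name api_service --network my-app-net my-api-image
```

<p>From inside <code>api_service</code>, the database is now reachable at the hostname <code>mysqldb</code>, which is exactly how multi-container applications wire their components together.</p>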
<p><strong>Volumes:</strong> Imagine data and configurations saved within your container. While the container is ephemeral, what if you need persistent storage? Docker volumes provide a solution by linking specific directories within your container to physical directories or cloud storage on the host machine. This ensures valuable data persists even after container restarts.</p>
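<p>A minimal sketch of working with a named volume (assuming a running Docker daemon; names are illustrative):</p>

```bash
# Create a named volume managed by Docker
docker volume create mysql-data

# Mount it into a container; data written to /var/lib/mysql survives
# container removal and restarts
docker run -d --name mysqldb -v mysql-data:/var/lib/mysql mysql:latest

# Inspect where Docker stores the volume on the host
docker volume inspect mysql-data
```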
<p><strong>Docker registries:</strong> Public and private registries serve as repositories for storing and sharing Docker images. Docker Hub is the most popular public registry, offering pre-built images for various applications and tools. However, private registries allow organizations to manage and securely share their own custom images within teams or across departments.</p>
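<p>Interacting with registries typically looks like the following sketch; the private registry hostname and image names here are placeholders:</p>

```bash
# Pull a pre-built image from Docker Hub, the default public registry
docker pull nginx:latest

# Tag a local image with a private registry's address, then push it
docker tag my-api-image myregistry.example.com/team/my-api:1.0
docker push myregistry.example.com/team/my-api:1.0
```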
<p><strong>Docker build:</strong> This command instructs Docker to use a Dockerfile and build a new image based on its instructions. This process involves downloading base images, installing software, and configuring the environment based on the specified steps.</p>
<p><strong>Docker run:</strong> This command instructs Docker to create and run a container from a specific image. You can specify additional options like environment variables, ports, and volumes to customize the container's behavior.</p>
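<p>Putting the two commands together, a typical workflow looks like this sketch (the image name and environment variable are illustrative assumptions):</p>

```bash
# Build an image from the Dockerfile in the current directory
docker build -t my-api-image .

# Create and start a container from that image, publishing port 8080
# on the host and passing an environment variable into the container
docker run -d -p 8080:8080 -e SPRING_PROFILES_ACTIVE=prod my-api-image
```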
<p><strong>Docker Compose:</strong> This tool simplifies managing multi-container applications. It allows you to define all the required containers, their configurations, and relationships in a single YAML file. Docker Compose then builds and runs all the containers with a single command, streamlining development and deployment workflows.</p>
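<p>As a minimal, illustrative <code>docker-compose.yml</code> (the service names and password are placeholders, not taken from a real project):</p>

```yaml
services:
  web:
    build: .            # build the image from the local Dockerfile
    ports:
      - "8080:8080"     # publish the app's port on the host
  db:
    image: mysql:latest
    environment:
      MYSQL_ROOT_PASSWORD: example
```

<p>A single <code>docker-compose up</code> then builds the <code>web</code> image, starts both containers, and places them on a shared network where they can reach each other by service name.</p>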
<p>In the next exciting chapter, we dive hands-on into containerizing your Spring Boot application! Unleash the power of Docker and experience faster deployments, improved scalability, and streamlined development. Buckle up and join us on this containerization journey!</p>
]]></content:encoded></item><item><title><![CDATA[Pakistan's Agricultural Crossroads: Navigating Challenges and Embracing Opportunities]]></title><description><![CDATA[Nestled at the crossroads of South Asia, Pakistan's sprawling landscapes not only paint a picturesque scene but also weave the very fabric of its economic identity. In this diverse tapestry of terrain, one sector emerges as the true backbone of the n...]]></description><link>https://www.azhanali.com/pakistans-agricultural-crossroads-navigating-challenges-and-embracing-opportunities</link><guid isPermaLink="true">https://www.azhanali.com/pakistans-agricultural-crossroads-navigating-challenges-and-embracing-opportunities</guid><category><![CDATA[pakistan]]></category><category><![CDATA[agriculture]]></category><category><![CDATA[agritech]]></category><dc:creator><![CDATA[Azhan Ali]]></dc:creator><pubDate>Sun, 03 Sep 2023 20:49:36 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1693772108840/955fc0ec-f0c9-4d1b-99a1-de5a43692399.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Nestled at the crossroads of South Asia, Pakistan's sprawling landscapes not only paint a picturesque scene but also weave the very fabric of its economic identity. In this diverse tapestry of terrain, one sector emerges as the true backbone of the nation: agriculture. Stretching across 79.6 million hectares, with 23 million dedicated to cultivation and another 4.6 million cloaked in forests, Pakistan's agricultural sector stands as an economic giant with colossal influence. In the annals of global agriculture, Pakistan is not just another player; it is wielding a lush cornucopia that includes the 7th largest wheat harvest, the 5th highest cotton yield, and the coveted titles of the 4th largest sugar cane, mango, and 9th biggest rice producer. However, it's not just the sheer output that captivates the world; it's the magnitude of its impact on Pakistan itself. 
With an annual income of $38 billion, contributing a staggering 23% to the nation's GDP, employing 37.4% of its labor force, and accounting for 20% of its export revenue, Pakistan's agriculture sector is not merely a sector; it's an economic juggernaut that steers the course of the nation's prosperity.</p>
<p>Yet, amid these bountiful fields lies a thorny path, beset by challenges that threaten to stymie this agricultural powerhouse. A broken supply chain disrupts the flow of produce from farm to market, diminishing both profits and food security. The fragmentation of landholdings among small farmers prevents the realization of economies of scale, impeding efficiency and growth. Primitive farming methods persist, shackling the sector to dated practices. Excessive use of fertilizers and pesticides takes a toll on the environment and human health. The specter of water scarcity looms large, casting shadows over irrigation-dependent agriculture. Meanwhile, access to credit remains a hurdle for farmers seeking to invest in their livelihoods. In this blog, we embark on a journey through the fertile fields and innovative initiatives that define Pakistan's agricultural landscape, addressing the exciting prospects and transformative changes that promise to reshape the nation's future while confronting the formidable challenges that lie ahead.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1693766202069/3878a224-16a3-4957-ae5b-d52c9bae2a3c.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-the-broken-links-of-pakistans-agricultural-supply-chain"><strong>The Broken Links of Pakistan's Agricultural Supply Chain:</strong></h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1693769849784/2f31eca4-ee1f-467f-a1f1-ec1197b36dfb.png" alt class="image--center mx-auto" /></p>
<p><em><mark>Figure 1: The Agriculture Supply Chain in Pakistan's Grain Market</mark></em></p>
<p>In the intricate web of Pakistan's agricultural supply chain, a vital component, the "Arthi" or commission agent, plays a central role. These agents act as intermediaries who facilitate the buying and selling of agricultural produce, including livestock, while also managing financial transactions. Operating predominantly in grain, fruit, and vegetable markets, Arthis serve as critical bridges connecting various stakeholders in the agricultural ecosystem.</p>
<p>The supply chain, as depicted in Figure 1 for the grain market, exhibits intricate connections. In the grain market, Kacha Arthis and brokers often act as middlemen, connecting farmers with Pakka Arthis, who are crop buyers. Kacha Arthis and brokers charge commissions on loans and secure crop titles to ensure sales go through specified Arthis. This multi-layered structure can lead to increased costs for farmers, particularly when Thekedars (contractors) become involved in credit delivery.</p>
<p>In fruit and vegetable markets, Arthis primarily lend cash to merchants and transport companies, rarely dealing directly with farmers. These merchants or transporters, in turn, either provide loans to farmers or purchase crops directly, with the condition that the produce must be sold through the specified Arthi. Both scenarios require produce to be eventually funneled back to the Arthi who initially provided credit.</p>
<p>While Arthis offer valuable financial support to farmers, their services come at a cost, often in the form of high-interest rates or commissions. Farmers prefer this informal credit system over formal sources due to its flexibility, timely availability, and minimal collateral requirements. This preference persists despite the substantial costs, which can be four to five times higher than formal institutions.</p>
<p>The Arthi's role, though nuanced, can be seen as both a financial lifeline for farmers and a profit-generating mechanism. While some view Arthis as exploitative, others appreciate the crucial support they provide during crises. Nonetheless, the complex network of Arthis contributes to a broken supply chain, causing prices to surge by up to 150% from the farmer to the end consumer and leading to a staggering 30% wastage. Addressing the challenges within this intricate supply chain is essential for Pakistan's agriculture sector to thrive and benefit all stakeholders.</p>
<h3 id="heading-empowering-farmers-through-accessible-credit"><strong>Empowering Farmers through Accessible Credit</strong></h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1693771968671/7cd8a542-578c-4d07-8dc6-3a6f4893ebce.jpeg" alt class="image--center mx-auto" /></p>
<p>One of the most pressing challenges faced by Pakistan's agriculture sector is the issue of credit accessibility for farmers. In a nation where a significant portion of the population relies on the informal sector, including commission agents (Aarthi), to meet their financial needs, the agricultural community often finds itself underserved by conventional commercial banks. These banks typically cater to individuals with conventional income sources, such as regular salary slips, making it difficult for farmers, who lack such documentation, to secure loans.</p>
<p>In this context, Pakistan urgently requires innovative companies and entities that can bridge the credit gap for farmers. Unlike conventional bank loans, which often fund non-productive expenses, loans in the agricultural sector have a direct impact on the nation's food supply. They enable farmers to invest in their crops, machinery, and resources, ultimately enhancing agricultural productivity.</p>
<p>Interestingly, the <em>brick-and-mortar</em> model proves to be highly effective in addressing this credit issue. Companies extending loans in rural areas can establish their asset/revenue engine within the rural landscape. In contrast, commercial banks, if they open branches in rural areas, often end up with a liability engine in rural locations, primarily focusing on collecting deposits rather than catering to the credit needs of the farming community.</p>
<p>Empowering farmers with accessible credit is not just a financial matter; it holds the key to bolstering Pakistan's food security, economic prosperity, and the well-being of its rural population. Innovative solutions and financial institutions that understand the unique needs of farmers are instrumental in shaping a more resilient and thriving agricultural sector.</p>
<h3 id="heading-unlocking-agricultural-efficiency-through-corporate-farming"><strong>Unlocking Agricultural Efficiency through Corporate Farming</strong></h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1693770720482/290d2c58-c796-4794-b73e-5cc7b72f2b0b.jpeg" alt class="image--center mx-auto" /></p>
<p>A glaring challenge plaguing Pakistan's agricultural landscape lies in the fragmentation of landholdings among small-scale farmers, which hinders the realization of crucial economies of scale. The vast majority of agricultural operations in Pakistan are conducted on minuscule land parcels, with approximately 27 million acres cultivated on plots of six acres or less, on average. This division of land among numerous heirs, following the passing of the landowner, perpetuates inefficiency and stymies progress in the sector.</p>
<p>To combat this fragmentation and foster agricultural efficiency, the government has advocated for cooperative farming endeavors. These initiatives encourage small-scale farmers to band together, pool their resources, and collectively cultivate their land, thereby reaping the benefits of economies of scale. Among the pioneers of such corporate farming ventures is Jehangir Tarin, a name synonymous with successful agricultural enterprise in Pakistan. His corporation not only manages to produce crops at half the cost of small-scale farmers but also achieves per-acre yields that more than double those of traditional farming methods.</p>
<p>However, the untapped potential of Pakistan's agricultural landscape remains vast, with over 54 million acres of cultivable land, of which 27 million acres remain undeveloped. An innovative solution has emerged, proposing the leasing of vast virgin lands to corporate investors. If implemented transparently, this strategy could dramatically increase cultivation areas and boost crop sizes substantially, ushering in a new era of agricultural productivity.</p>
<p>The advantages of larger farms, fostered by economies of scale, are manifold. Consolidating land into more extensive holdings enables farmers to harness larger machinery, benefit from bulk purchases, and access specialized labor, ultimately leading to significant cost savings. Additionally, larger farms possess the capacity to overcome the infrastructure challenges that plague smaller holdings, such as fragmented irrigation systems, inadequate access roads, and insufficient storage facilities. The ability to invest in and maintain robust infrastructure positions larger farms as efficient and competitive players in the agricultural landscape.</p>
<p>Furthermore, the constraints imposed by small landholdings can limit diversification in agricultural activities. Diversification is pivotal for risk mitigation, resource optimization, and overall productivity enhancement. However, farmers with limited land may find themselves restricted to cultivating only a handful of crops, diminishing their ability to adapt to market demands and changing conditions.</p>
<p>Ultimately, transitioning towards larger, corporate-style farming operations presents a promising path forward for Pakistan's agricultural sector. As small-scale farmers shift towards high-value agriculture, such as fruit farming, they can unlock the potential for improved livelihoods and a more robust agricultural landscape, setting the stage for a sustainable and prosperous future.  </p>
<h3 id="heading-navigating-pakistans-thirst-for-water"><strong>Navigating Pakistan's Thirst for Water</strong></h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1693771167295/3f663870-f11a-481e-8275-b2d1e4548b64.jpeg" alt class="image--center mx-auto" /></p>
<p>Amidst the rich tapestry of Pakistan's agriculture sector, a pressing concern looms large - water scarcity. Pakistan's status as the world leader in water consumption per GDP unit underscores the magnitude of this challenge. The agricultural industry stands as the primary culprit, accounting for the lion's share of water consumption. What's particularly alarming is that Pakistan yields one of the lowest agricultural outputs compared to its water consumption, signifying a stark inefficiency in resource utilization.</p>
<p>In response to this crisis, Pakistan unveiled its own water division plan in 1991, known as the Indus Water Division. This plan aimed to allocate the nation's water resources equitably, with a total water production of 117.35 million acre-feet. One critical aspect of this distribution is the allocation of water to the Indus Delta under the Kotri Barrage. The Indus Delta relies on freshwater to nurture sea forests, preventing the intrusion of saline surface water that could harm crops. This vital function is upheld through the allocation of 3 million acre-feet to Sindh.</p>
<p>The distribution of water resources among provinces is a topic of paramount importance. In accordance with the Indus Water Division, Punjab receives the largest share at 47%, followed by Sindh with 42%, KPK with 8%, and Baluchistan with 3%. However, these allocations only represent part of the equation. The capacity to store water is equally crucial. Pakistan's water storage capacity is alarmingly limited, providing only 30 days of water storage. In stark contrast, India boasts 190 days of storage, while the United States, with the Toledo River alone, enjoys a staggering 900 days of water storage.</p>
<p>Addressing Pakistan's water scarcity is not only vital for the agricultural sector but also for the nation's overall sustainability and prosperity. It necessitates comprehensive measures, from efficient water management and conservation practices to investment in water storage infrastructure. In doing so, Pakistan can secure a more stable and productive future for its agriculture and its people.</p>
<h3 id="heading-modernizing-agriculture-from-primitive-techniques-to-sustainable-practices"><strong>Modernizing Agriculture: From Primitive Techniques to Sustainable Practices</strong></h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1693772784618/410ea898-cad0-4bc0-a4d4-86a2a7e61589.jpeg" alt class="image--center mx-auto" /></p>
<p>In the heart of Pakistan's agricultural landscape, a transformation is underway - a shift from primitive farming techniques to sustainable and efficient practices. At the forefront of this change is the urgent need to address two significant challenges plaguing the sector: the excessive use of fertilizers and outdated tilling methods.</p>
<p><strong>1. Excessive Use of Fertilizers: Nourishing the Soil, Not Poisoning It</strong></p>
<p>One of the gravest mistakes in agriculture is the excessive and indiscriminate use of fertilizers, particularly urea. Farmers often resort to throwing urea on their fields without considering the specific needs of their soil. This not only renders the soil unusable but also contributes to environmental degradation.</p>
<p>Pakistan's soil is predominantly high in pH, with an average pH level ranging between 8.1 and 8.2. This alkaline nature of the soil, rich in calcium, hampers the absorption of fertilizers by plants, rendering much of it ineffective. The common practice of using urea, a nitrogen-specific fertilizer, is not only the cheapest option but also exacerbates the problem.</p>
<p>Apart from negatively impacting crop yield, the excessive use of urea has dire consequences for the environment. It contaminates underground water sources, rendering them unsafe for human consumption. Moreover, it disrupts the delicate balance of nutrients in the soil, leading to long-term fertility issues.</p>
<p>A sustainable alternative is to conduct soil tests to determine the specific nutrient requirements of each field. This targeted approach ensures that fertilizers are applied in the right quantities and proportions, optimizing their effectiveness while minimizing their environmental impact.</p>
<p><strong>2. Tilling: From Destruction to Conservation</strong></p>
<p>Traditional tilling, a common practice in Pakistan's agriculture, involves plowing the soil before planting crops. However, this seemingly routine act has far-reaching consequences for soil health and the environment.</p>
<p>Tilling disrupts the natural ecosystem of the soil, destroying vital microorganisms and their food sources. As a result, farmers often find themselves compelled to compensate for this loss by using external sources of fertilizers. This creates a vicious cycle of dependence on chemical inputs, which not only erodes soil fertility but also harms the environment.</p>
<p>Moreover, the process of tilling releases carbon stored in the soil into the atmosphere, contributing to greenhouse gas emissions. This not only exacerbates climate change but also depletes the soil's carbon content, which is essential for its health and productivity.</p>
<p>Fortunately, there is a more sustainable path forward. Modern farming techniques, such as the use of seed drills, have the potential to replace traditional tilling methods. These drills plant seeds without disturbing the soil, preserving its natural balance of microorganisms and carbon content.</p>
<p>By adopting these sustainable practices, Pakistan's agriculture sector can mitigate the damage caused by excessive fertilizer use and outdated tilling methods. This transition not only promises healthier and more productive soils but also contributes to the broader goals of environmental conservation and food security. It's a step towards a more sustainable and prosperous future for Pakistan's agriculture.</p>
<h3 id="heading-conclusion-paving-the-way-for-a-thriving-agricultural-future"><strong>Conclusion: Paving the Way for a Thriving Agricultural Future</strong></h3>
<p>Pakistan's agriculture sector, with its vast potential and significant contributions to the economy, stands at a crucial crossroads. While it has played a pivotal role in feeding the nation and supporting millions of livelihoods, it faces a multitude of challenges that demand innovative and sustainable solutions.</p>
<p>The broken supply chain, characterized by the presence of commission agents known as "Aarthi," has resulted in increased costs, inefficiencies, and a lack of transparency. Fragmented landholdings have hindered the realization of economies of scale, making it difficult for small farmers to compete effectively. Water scarcity, exacerbated by Pakistan's high water consumption, poses a significant threat to agriculture's sustainability.</p>
<p>In the realm of finance, farmers often struggle to access credit through conventional banks due to the absence of collateral and formal documentation. Innovative companies are stepping in to bridge this gap, offering productive loans that impact the entire food supply chain positively.</p>
<p>Perhaps one of the most urgent challenges lies in the primitive techniques still prevalent in Pakistan's agriculture. Excessive fertilizer use and outdated tilling practices have deteriorated soil quality and harmed the environment. However, sustainable alternatives, such as targeted fertilization and modern farming methods, offer a path towards rejuvenating the land.</p>
<p>As Pakistan strives for agricultural excellence, it is essential to embrace change and modernization while preserving its agricultural heritage. By addressing these challenges head-on and implementing sustainable practices, Pakistan can unlock its agricultural potential, ensure food security, and contribute to the well-being of its people.</p>
<p>The journey towards a thriving agricultural future may not be without obstacles, but with determination, innovation, and a commitment to sustainable practices, Pakistan can cultivate a bountiful tomorrow—one where its agriculture sector remains the backbone of its economy and the sustenance of its people. Together, we can nurture the soil, empower the farmers, and reap the rewards of a flourishing agricultural landscape.</p>
]]></content:encoded></item><item><title><![CDATA[Harnessing the Power of India Stack: Key Takeaways for Pakistan]]></title><description><![CDATA[As I delved into the intricacies of India Stack, I found myself captivated by the magnitude and profound influence of India's remarkable digital infrastructure. It astounded me to witness the incredible achievements made by the country, prompting me ...]]></description><link>https://www.azhanali.com/harnessing-the-power-of-india-stack-key-takeaways-for-pakistan</link><guid isPermaLink="true">https://www.azhanali.com/harnessing-the-power-of-india-stack-key-takeaways-for-pakistan</guid><category><![CDATA[fintech]]></category><category><![CDATA[finance]]></category><dc:creator><![CDATA[Azhan Ali]]></dc:creator><pubDate>Sat, 03 Jun 2023 00:28:52 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/HNPrWOH2Z8U/upload/44f892473ffdc8717b0e5cb63cc12069.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As I delved into the intricacies of India Stack, I found myself captivated by the magnitude and profound influence of India's remarkable digital infrastructure. It astounded me to witness the incredible achievements made by the country, prompting me to contemplate the valuable insights that emerging nations like Pakistan can glean from this groundbreaking initiative. In this blog post, I aim to highlight the key takeaways that can empower Pakistan and how digital payment can eradicate its most pressing problems.</p>
<hr />
<h2 id="heading-the-genesis-of-india-stack">The Genesis of India Stack</h2>
<p>In 2008, the seeds of financial inclusion were sown in India, driven by the vision of extending financial services to individuals, particularly those belonging to economically disadvantaged strata. However, several formidable obstacles stood in the way of transforming this idea into reality.</p>
<ol>
<li><p>Access: A mere one in twenty-five people had access to a unique ID, which made it impossible for financial institutions to perform KYC (know your customer) checks, a requirement that became unavoidable, especially after 9/11.</p>
</li>
<li><p>Retention: The issue of retention arose once individuals were onboarded into the system. It became imperative to offer incentives that would encourage their active participation and increase their digital footprint.</p>
</li>
<li><p>Data empowerment: Unlike many tech giants, India aspired to chart a different course. The nation sought to empower individuals by granting them control over their digital footprints and leveraging data for their benefit.</p>
</li>
</ol>
<p>In the subsequent sections of this blog post, we will delve into each of these challenges in meticulous detail, exploring how India ingeniously surmounted each impediment on its path toward comprehensive financial inclusion.</p>
<h3 id="heading-access">Access:</h3>
<p>When India started building India Stack back in 2008, only 17% of the population had bank accounts, and the biggest impediment to providing access to financial services was the lack of a way to uniquely identify an individual. To address this issue, India launched the Aadhaar card. Aadhaar is a 12-digit identification number that serves as proof of identity and proof of address for residents of India.</p>
<p>The way India adopted the Aadhaar card is nothing less than phenomenal. The following statistics bear testament to the magnitude of its impact:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685742073126/1c263056-493f-4864-a807-f41568a929f9.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-the-foundational-role-of-aadhaar-in-indias-pursuit-of-financial-inclusion">The Foundational Role of Aadhaar in India's Pursuit of Financial Inclusion:</h3>
<p>As widely recognized, KYC (know your customer) procedures have become an obligatory requirement for opening bank accounts, with increased stringency following revised regulations by organizations like FATF (Financial Action Task Force) in the wake of significant events like the 9/11 attacks. Authentication further compounds the challenge of financial inclusion, as individuals must prove their identities before accessing financial services. India's answer to these predicaments lies in the Aadhaar card.</p>
<p>The Aadhaar card has emerged as a comprehensive solution, resolving both the hurdles of KYC and authentication. Its impact is strikingly evident, as indicated by the staggering figures: individuals have authenticated themselves over <strong>33 billion</strong> times using their Aadhaar identity, while banks have conducted more than <strong>7.5 billion</strong> KYC verifications through Aadhaar. These statistics bear testimony to the vital role Aadhaar has played in establishing a solid foundation for India's relentless pursuit of financial inclusion.</p>
<h3 id="heading-retention">Retention:</h3>
<p>Having successfully integrated individuals into the financial ecosystem, the Indian government focused on fostering long-term engagement. To achieve this objective, they introduced UPI (Unified Payment Interface). Building upon the triumph of the Aadhaar card, India launched a real-time payment system that facilitated instant money transfers between diverse banks through smartphones.</p>
<p>The impact of UPI has been nothing short of astounding. In December 2022 alone, UPI processed an astonishing 7.28 billion transactions—a colossal figure for any real-time payment processing system. These numbers stand as a testament to the resounding success and widespread adoption of UPI, further solidifying its role in retaining users within the digital financial landscape. In addition to retaining users, UPI has played a pivotal role in expanding their digital footprints, empowering them to leverage these footprints to access loans with greater efficiency.</p>
<h3 id="heading-data-empowerment">Data Empowerment:</h3>
<p>The aspect of data empowerment within the India Stack framework is still in its early stages but is steadily maturing. Its primary objective is to enable individuals to utilize their digital footprints as a means to access financial services.</p>
<p>This can be understood through the example below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1685747155569/85b9d585-6d93-41c3-9ed5-e5cd8d09765d.png" alt class="image--center mx-auto" /></p>
<p>To illustrate this concept, consider the following scenario: Imagine a small business owner seeking credit to expand their operations. Instead of relying solely on collateral, they can leverage their digital footprint as a basis for obtaining credit. This transformative approach not only incentivizes economic growth but also fosters increased employment opportunities, ultimately bolstering the overall purchasing power of the nation.</p>
<p>This example exemplifies how data empowerment within the India Stack framework holds immense potential, progressively paving the way for a more inclusive and dynamic financial landscape.</p>
<hr />
<h1 id="heading-key-insights-for-emerging-countries-like-pakistan">Key Insights for Emerging Countries like Pakistan:</h1>
<p>Goldman Sachs identifies Pakistan as one of the eleven countries with significant potential to become one of the world's largest economies. However, to realize this potential, emerging countries like Pakistan must take crucial steps to document their economy, a task that cannot be accomplished without digitizing access to financial services. According to a Gallup Survey conducted in January 2021, approximately 82% of the population in Pakistan possesses computerized national identification cards, which can serve as the foundation for the country's digital infrastructure, akin to India's Aadhaar cards.</p>
<p>By adopting a robust and scalable solution similar to India Stack, Pakistan can effectively address the following challenges:</p>
<ol>
<li><p>Tax Collection: Pakistan currently faces a disparity between tax collection, estimated at around <strong>6,500 billion</strong> Rupees annually, and expenditures, which exceed <strong>9,500 billion</strong> Rupees. To improve the tax-to-GDP ratio, Pakistan needs to incentivize digital payments by developing infrastructure akin to India Stack.</p>
</li>
<li><p>Enabling Environment for the Private Sector: A functional digital payment infrastructure is vital for the private sector to thrive. Pakistan should follow India's example and introduce scalable KYC and authentication solutions, enabling the private sector to scale rapidly.</p>
</li>
<li><p>Documentation: In sectors where cash transactions are prevalent, such as dairy, meat, fruits, and vegetables, the lack of digital payment options hampers economic documentation. The absence of a financial trail hinders tax collection targets set by the state. By embracing digital payments, Pakistan can effectively document its economy and address this issue.</p>
</li>
</ol>
<p>Implementing these measures will foster a more transparent and inclusive economic environment in Pakistan, supporting sustainable growth and paving the way for enhanced financial inclusion.</p>
]]></content:encoded></item><item><title><![CDATA[Why you should (not) use GraphQL ?]]></title><description><![CDATA[Recently, I read about the GraphQL which is a query language for APIs. I thought to give a try and share the experience with you.
What is GraphQL?
As described earlier, GraphQL is a query language for APIs. Basically, it gives clients the power to as...]]></description><link>https://www.azhanali.com/why-you-should-not-use-graphql</link><guid isPermaLink="true">https://www.azhanali.com/why-you-should-not-use-graphql</guid><category><![CDATA[software architecture]]></category><category><![CDATA[software development]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[GraphQL]]></category><category><![CDATA[APIs]]></category><dc:creator><![CDATA[Azhan Ali]]></dc:creator><pubDate>Sun, 19 Feb 2023 11:03:56 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1676754912127/926cf090-69bd-4cbe-9d03-6f605f7e8b1d.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Recently, I read about GraphQL, a query language for APIs. I thought I'd give it a try and share my experience with you.</p>
<h2 id="heading-what-is-graphql">What is GraphQL?</h2>
<p>As described earlier, GraphQL is a query language for APIs. Basically, it gives clients the power to ask for exactly what they want.</p>
<p>To understand it further, consider that your backend APIs are consumed by two front-end interfaces: web and mobile. And for some reason, both interfaces require separate fields. Without GraphQL, either you need to implement two types of APIs, one for your web client and one for your mobile client, or you need to send the exact same data to both of your clients.</p>
<h3 id="heading-problems-with-sending-excess-data">Problems with sending excess data:</h3>
<ol>
<li><p>Slower API performance</p>
</li>
<li><p>Increased network bandwidth</p>
</li>
</ol>
<p>To solve this problem, GraphQL comes into play: it lets each client ask for exactly what it wants and shapes the response to the client's needs.</p>
<h2 id="heading-how-do-we-use-graphql">How do we use GraphQL?</h2>
<p>To use GraphQL, we need to define a GraphQL schema. The schema consists of types that define the fields inside them, as shown in the image below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1676755343933/f64fe999-52b0-4401-8919-8609b65dd6fb.png" alt class="image--center mx-auto" /></p>
<p>The ! after ID in the Book type makes the field non-nullable. The Query type lists the operations that clients can call.</p>
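<p>Since the schema in this post appears only as a screenshot, here is a minimal sketch of what such a schema might look like in GraphQL's schema definition language. The exact field list is an assumption based on the fields used later in this post (title, description, author):</p>

```graphql
# A hypothetical Book type; the "!" after ID marks the field as non-nullable.
type Book {
  id: ID!
  title: String
  description: String
  author: String
}

# The Query type lists the operations clients can call.
type Query {
  allBooks: [Book]
  getBook(id: ID!): Book
}
```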
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1676755511513/42b8297e-b122-470c-996d-a561d28de5fd.png" alt class="image--center mx-auto" /></p>
<p>Notice how I have used the allBooks and getBook methods defined in the Query type of the GraphQL schema.</p>
<h1 id="heading-testing-via-postman">Testing via Postman:</h1>
<p>Now, let's see how we can test the newly implemented GraphQL API. Look at the image below: I requested three fields (title, description, and author), and the API returned exactly those fields.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1676803871430/202f689c-00cb-4d84-95e0-4cd77f59eaf1.png" alt class="image--center mx-auto" /></p>
<p>Now, let's ask for a different set of fields. Again, GraphQL returns exactly what we asked for.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1676803963559/20faa6ac-3d5c-4f39-9c1b-733f9f4ef6ae.png" alt class="image--center mx-auto" /></p>
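<p>The behaviour in the screenshots can be modelled in a few lines of plain Python. This is not how GraphQL is actually implemented, just a toy sketch of the core idea: the server holds full records, and every response is projected down to exactly the fields the client requested. The book data below is made up for illustration:</p>

```python
# Toy model of GraphQL-style field selection (not a real GraphQL server).
books = [
    {"id": "1", "title": "Book A", "description": "First book", "author": "Alice"},
    {"id": "2", "title": "Book B", "description": "Second book", "author": "Bob"},
]

def project(record, requested_fields):
    """Return only the fields the client asked for."""
    return {field: record[field] for field in requested_fields if field in record}

def all_books(requested_fields):
    """Like the allBooks query above: every book, trimmed to the requested fields."""
    return [project(book, requested_fields) for book in books]
```

<p>Asking for ["title", "author"] returns each book with just those two keys, mirroring how GraphQL trims its response to match the query.</p>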
<h2 id="heading-problems-of-using-graphql">Problems with using GraphQL</h2>
<ol>
<li><p>It is hard to cache responses. Caching is possible, but it becomes difficult because clients are free to ask for whatever fields they want.</p>
</li>
<li><p>Generally, with REST APIs the HTTP status code tells us whether a request succeeded, but this is not the case with GraphQL: a failed query can still come back with a 200 status, so to detect an error you need to parse the response body.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1676756361727/b407f3f9-7b00-451d-87c6-6c6bee7c64d4.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
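<p>To make the second point concrete, here is a small sketch of that error check in Python. A failing GraphQL request typically still returns HTTP 200, and the failure only shows up as an errors array in the JSON body, so the client has to inspect the body rather than the status code. The response payloads below are illustrative, not taken from the API in this post:</p>

```python
import json

def graphql_failed(body_text):
    """Detect a GraphQL error by parsing the response body.

    With REST you would usually check the HTTP status code instead;
    GraphQL responses often carry HTTP 200 even when the query failed.
    """
    body = json.loads(body_text)
    return bool(body.get("errors"))

# Illustrative responses: both could arrive with HTTP status 200.
failed = '{"errors": [{"message": "Cannot query field \\"name\\" on type \\"Book\\""}]}'
ok = '{"data": {"getBook": {"title": "Book A"}}}'
```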
<h2 id="heading-when-you-should-use-graphql">When should you use GraphQL?</h2>
<ol>
<li><p>It is advisable to use GraphQL if you have a complex API.</p>
</li>
<li><p>If you have multiple interfaces, using GraphQL can be a good idea.</p>
</li>
<li><p>If you're worried about bandwidth, GraphQL might be a good option.</p>
</li>
</ol>
<p>GraphQL also shines when you are getting data from multiple sources. For example, if you are building a dashboard that pulls data from several services, such as a logging service, an analytics service, and a monitoring service, GraphQL lets the client specify exactly what it wants from each of them.  </p>
<p>I've also attached the GitHub repository link in case you want to explore it in depth.</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://github.com/Azhan7/Exploring-GraphQL">https://github.com/Azhan7/Exploring-GraphQL</a></div>
]]></content:encoded></item><item><title><![CDATA[Why Microservices are more complex than you think?]]></title><description><![CDATA[Microservices are a distributed system architecture, and they have lots of problems associated with them. Nowadays, more and more companies claim they are using microservices, whereas the reality is quite different from what they claim.
The defining ...]]></description><link>https://www.azhanali.com/why-microservices-are-more-complex-than-you-think</link><guid isPermaLink="true">https://www.azhanali.com/why-microservices-are-more-complex-than-you-think</guid><category><![CDATA[Microservices]]></category><category><![CDATA[software development]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[design principles]]></category><category><![CDATA[software architecture]]></category><dc:creator><![CDATA[Azhan Ali]]></dc:creator><pubDate>Sun, 12 Feb 2023 22:07:18 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1676239380826/012899f3-1a73-4110-bc33-d7739c86b0e1.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Microservices are a distributed system architecture, and they have lots of problems associated with them. Nowadays, more and more companies claim they are using microservices, whereas the reality is quite different from what they claim.</p>
<p>The defining characteristics of microservices are:</p>
<ol>
<li><p>They are small.</p>
</li>
<li><p>Focused on One Task</p>
</li>
<li><p>Aligned with a bounded context</p>
</li>
<li><p>Autonomous</p>
</li>
<li><p>Independently deployable</p>
</li>
<li><p>Loosely coupled</p>
</li>
</ol>
<h2 id="heading-they-are-small">They are small.</h2>
<p>Early pioneers of microservices used to respond to the question of how small a microservice should be by saying:</p>
<blockquote>
<p>An ideal microservice fits inside James Lewis' head.</p>
</blockquote>
<p>James Lewis was one of the people who popularised the idea of microservices. The rationale behind keeping services this small was to compartmentalise a problem.</p>
<h3 id="heading-how-to-measure-if-the-given-microservice-is-actually-micro">How to measure if the given microservice is actually micro</h3>
<p>If you can rewrite a microservice from scratch in a week or two, I think you are on the right path. That's one way to measure whether your services are at the right scale.</p>
<h2 id="heading-focused-on-one-task">Focused on One Task</h2>
<p>To break up a monolithic architecture, we must first determine which sub-problems to carve out into microservices. Viewed from the outside, each microservice should accomplish exactly one task.</p>
<h2 id="heading-aligned-with-a-bounded-context">Aligned with a bounded context</h2>
<p>This idea comes from the book <strong>Domain-Driven Design</strong>, written by <strong>Eric Evans</strong>. As per Eric, "bounded context" can be defined as</p>
<blockquote>
<p>A defined part of software where particular terms, definitions, and rules apply in a consistent way.</p>
</blockquote>
<p>This means that if we fail to segregate our services well, breaking our application into microservices could actually backfire as it will make services tightly coupled.</p>
<h2 id="heading-autonomy">Autonomy</h2>
<p>The team handling each microservice should be autonomous in changing its implementation. The true value of microservices lies in this autonomy: the reduced need for coordination lets organizations scale quickly.</p>
<h2 id="heading-deployable-on-its-own">Deployable on its own</h2>
<p>Because microservices allow a team to deploy without being constrained by other teams, each team can make progress independently and more quickly.</p>
<p>Now the problem arises: if you always test your service with others before release, it is not independently deployable.</p>
<p>The real value that microservices bring to the table is that they can be built, tested, and deployed independently.</p>
<p>So, in short, one can say,</p>
<blockquote>
<p>Microservices are an organization's decoupling strategy.</p>
</blockquote>
<h2 id="heading-loosely-coupled">Loosely Coupled</h2>
<p>The interface to a microservice is a public API, so it should be changed with great care. If you change the API too often, you defeat the whole purpose of the microservices architecture. Also, when consuming an API, use the minimum data you need in order to reduce coupling.</p>
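<p>The advice to consume the minimum data possible can be sketched in a few lines of Python. Rather than binding a consumer to every field another service returns, the consumer below picks out only the two fields it actually needs and ignores the rest, so the producing service can add or reshape other fields without breaking it. The payload shape and field names are hypothetical:</p>

```python
import json

def read_order_summary(payload_text):
    """Consume only the fields this service needs from another service's API.

    Ignoring everything else keeps the coupling between services loose:
    the producer can freely add new fields without breaking this consumer.
    """
    payload = json.loads(payload_text)
    return {"order_id": payload["order_id"], "status": payload["status"]}

# A hypothetical response with extra fields the consumer does not care about.
response = '{"order_id": "42", "status": "shipped", "warehouse": "B7", "internal_rev": 3}'
```

<p>Even if the producing team later adds or renames the warehouse and internal_rev fields, this consumer keeps working because it never depended on them.</p>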
]]></content:encoded></item></channel></rss>