<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[sorXCode's Blog]]></title><description><![CDATA[sorXCode's Blog]]></description><link>https://sorxcode.com</link><generator>RSS for Node</generator><lastBuildDate>Sun, 12 Apr 2026 03:36:06 GMT</lastBuildDate><atom:link href="https://sorxcode.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Part 2: Building an HTTP Rate Limiter with Cloudflare Workers]]></title><description><![CDATA[In Part 1, we introduced Cloudflare Workers and the concept of middleware at the edge. We saw how Workers let you execute logic close to your users, and we discussed their V8 isolate runtime. Now it’s time to move from theory to practice.
In this art...]]></description><link>https://sorxcode.com/part-2-building-an-http-rate-limiter-with-cloudflare-workers</link><guid isPermaLink="true">https://sorxcode.com/part-2-building-an-http-rate-limiter-with-cloudflare-workers</guid><category><![CDATA[cloudflare-worker]]></category><category><![CDATA[Middleware]]></category><category><![CDATA[TypeScript]]></category><category><![CDATA[edge computing]]></category><category><![CDATA[proxy]]></category><category><![CDATA[rate-limiting]]></category><category><![CDATA[tokenbucket]]></category><dc:creator><![CDATA[Victor Adeyanju]]></dc:creator><pubDate>Sat, 22 Nov 2025 23:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1764061651618/06cf7660-437d-4060-b9e5-c7a636e7bd32.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><a target="_blank" href="https://sorxcode.com/part-1-understanding-cloudflare-workers-and-edge-middleware">In Part 1</a>, we introduced Cloudflare Workers and the concept of middleware at the edge. We saw how Workers let you execute logic close to your users, and we discussed their V8 isolate runtime. Now it’s time to move from theory to practice.</p>
<p>In this article, we’ll build a <strong>working HTTP rate limiter</strong>. You’ll learn:</p>
<ul>
<li>How to track request usage per client</li>
<li>The differences between <strong>fixed-window</strong> and <strong>token-bucket</strong> algorithms</li>
<li>How to implement a <strong>Durable Object</strong> in TypeScript</li>
<li>How to integrate the limiter into Worker middleware</li>
<li>How to test and benchmark your limiter at the edge</li>
</ul>
<p>By the end, you’ll have a foundation for building your own edge middleware that’s robust, efficient, and globally scalable.</p>
<hr />
<h2 id="heading-what-is-http-rate-limiting"><strong>What is HTTP Rate Limiting?</strong></h2>
<p>Rate limiting controls how often clients can access an API. It is a common practice for throttling traffic, protecting backend workloads, and preventing abuse. For rate limiting to work, clients must be distinguishable: either a unique identifier is issued to the client (e.g., an API key or client key), or one is derived from the request itself (e.g., the IP address).</p>
<p>There are diverse rate-limiting strategies, but within the scope of this article we'll explore the two most common:</p>
<h3 id="heading-1-fixed-window-limiting">1. Fixed-Window Limiting</h3>
<p>The fixed-window algorithm counts requests in discrete time intervals, like 1-minute blocks. For example, if a client is allowed 10 requests per minute, the counter resets at the start of each minute. </p>
<p><strong>Pros:</strong></p>
<ul>
<li>Simple to implement</li>
<li>Easy to understand</li>
</ul>
<p><strong>Cons:</strong></p>
<ul>
<li>“Burstiness” at the window boundary: a client allowed 10 requests per minute can make 10 at the end of one window and 10 more at the start of the next, i.e. up to 20 requests in quick succession</li>
<li>Less smooth distribution of requests</li>
</ul>
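<p>To make the boundary problem concrete, here is a minimal in-memory sketch (illustrative only, not the Durable Object version built later in this article): a client that times its traffic around a window edge can be granted up to twice the limit in a short span.</p>

```typescript
// Minimal in-memory fixed-window counter (illustrative sketch only).
class FixedWindow {
  private count = 0;
  private windowStart = 0;

  constructor(private limit: number, private windowMs: number) {}

  allow(now: number): boolean {
    if (now - this.windowStart >= this.windowMs) {
      this.windowStart = now - (now % this.windowMs); // snap to the window boundary
      this.count = 0;
    }
    if (this.count >= this.limit) return false;
    this.count++;
    return true;
  }
}

// 10 requests/minute: a client can send 10 at t = 59.9s and 10 more at t = 60.0s,
// i.e. 20 accepted requests in about 100 ms despite the "10 per minute" limit.
const fw = new FixedWindow(10, 60_000);
let accepted = 0;
for (let i = 0; i < 10; i++) if (fw.allow(59_900)) accepted++;
for (let i = 0; i < 10; i++) if (fw.allow(60_000)) accepted++;
// accepted === 20
```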
<h3 id="heading-2-token-bucket-limiting">2. Token-Bucket Limiting</h3>
<p>The token-bucket algorithm is more flexible. Each client has a “bucket” of tokens, and every request consumes one token. Tokens are replenished at a fixed rate; if a client’s bucket is empty, the request is blocked.</p>
<p><strong>Pros:</strong></p>
<ul>
<li>Smooth request flow</li>
<li>Handles bursts gracefully</li>
<li>Easy to tune for limits and refill rates</li>
</ul>
<p><strong>Cons:</strong></p>
<ul>
<li>Slightly more complex than fixed-window</li>
</ul>
<p>For edge deployments, token-bucket is usually preferable because it balances fairness and responsiveness. Think of your favourite rate-limited API service; there's a high chance it uses a token bucket (or a variant of it).</p>
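<p>Before wiring this into Durable Objects, the core algorithm can be sketched in a few lines of plain TypeScript. This is a single-process, in-memory version for intuition only; the edge implementation follows in the next section.</p>

```typescript
// Minimal token bucket: `capacity` bounds bursts, `refillPerMs` sets the sustained rate.
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(private capacity: number, private refillPerMs: number, now = 0) {
    this.tokens = capacity;
    this.lastRefill = now;
  }

  take(now: number): boolean {
    // Replenish tokens for the time elapsed since the last check.
    const elapsed = now - this.lastRefill;
    this.tokens = Math.min(this.capacity, this.tokens + elapsed * this.refillPerMs);
    this.lastRefill = now;
    if (this.tokens < 1) return false;
    this.tokens -= 1;
    return true;
  }
}

// Capacity 2, refilling 1 token per 1000 ms:
const bucket = new TokenBucket(2, 1 / 1000);
bucket.take(0);    // true  — burst
bucket.take(0);    // true  — burst
bucket.take(0);    // false — bucket empty
bucket.take(1000); // true  — one token refilled after 1s
```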
<hr />
<h2 id="heading-setting-up-the-worker-and-durable-objects">Setting Up the Worker and Durable Objects</h2>
<p>To build a reliable rate limiter at the edge, we need a component that can hold state for each client; Cloudflare Durable Objects are perfect for this. They provide a single, consistent state instance per identifier, and all requests for that identifier are routed to the same object, no matter which Cloudflare data center handles them.</p>
<p>In this section, we’ll create a complete Durable Object–based rate-limiting system with three tiers: Free, Premium, and Ultimate. Each tier behaves differently, allowing you to enforce limits based on a client’s subscription level.</p>
<p>Before diving into the object classes, it’s important to keep TypeScript aware of your Worker bindings.</p>
<h3 id="heading-note-run-wrangler-types-after-adding-durable-objects">Note: Run <code>wrangler types</code> After Adding Durable Objects</h3>
<p>Whenever you define new Durable Object classes or update your <code>wrangler.jsonc</code> bindings, regenerate your type definitions:</p>
<pre><code class="lang-bash">wrangler types
</code></pre>
<p>This keeps your editor and TypeScript compiler aligned with your Worker’s actual environment. It’s important after adding new Durable Object namespaces or changing class names, because the types govern autocompletion and type checking.</p>
<h3 id="heading-the-worker-routing-requests-with-the-correct-rate-limiter">The Worker: Routing Requests with the Correct Rate Limiter</h3>
<p>Here is the Worker entrypoint that receives incoming API requests, determines which rate tier the client belongs to, and asks the appropriate Durable Object whether the request should be allowed or not.</p>
<pre><code class="lang-ts"><span class="hljs-comment">// src/index.ts</span>

<span class="hljs-keyword">import</span> { DurableObject } <span class="hljs-keyword">from</span> <span class="hljs-string">"cloudflare:workers"</span>;

<span class="hljs-keyword">export</span> <span class="hljs-keyword">interface</span> Env {
  FREE_LIMITER: DurableObjectNamespace&lt;FreeRequestLimiter&gt;;
  PREMIUM_LIMITER: DurableObjectNamespace&lt;PremiumRequestLimiter&gt;;
  ULTIMATE_LIMITER: DurableObjectNamespace&lt;UltimateRequestLimiter&gt;;
}

<span class="hljs-comment">// Worker logic</span>
<span class="hljs-keyword">export</span> <span class="hljs-keyword">default</span> {
  <span class="hljs-keyword">async</span> fetch(request, env, _ctx): <span class="hljs-built_in">Promise</span>&lt;Response&gt; {
    <span class="hljs-comment">// extract Client API_KEY</span>
    <span class="hljs-keyword">const</span> key = request.headers.get(<span class="hljs-string">"API_KEY"</span>);
    <span class="hljs-keyword">if</span> (key === <span class="hljs-literal">null</span>) {
      <span class="hljs-keyword">return</span> <span class="hljs-keyword">new</span> Response(<span class="hljs-string">"Could not determine client key"</span>, { status: <span class="hljs-number">400</span> });
    }

    <span class="hljs-comment">// determine rate limit tier</span>
    <span class="hljs-keyword">try</span> {
      <span class="hljs-comment">// in a production system, you would look up the key in a database for the tier</span>
      <span class="hljs-keyword">const</span> rateLimiter = key.startsWith(<span class="hljs-string">"ult_"</span>)
        ? env.ULTIMATE_LIMITER
        : key.startsWith(<span class="hljs-string">"pre_"</span>)
        ? env.PREMIUM_LIMITER
        : env.FREE_LIMITER;

      <span class="hljs-keyword">const</span> stub = rateLimiter.getByName(key);
      <span class="hljs-keyword">const</span> milliseconds_to_next_request =
        <span class="hljs-keyword">await</span> stub.getMillisecondsToNextRequest();

      <span class="hljs-keyword">if</span> (milliseconds_to_next_request &gt; <span class="hljs-number">0</span>) {
        <span class="hljs-keyword">return</span> <span class="hljs-keyword">new</span> Response(<span class="hljs-string">"Rate limit exceeded"</span>, { status: <span class="hljs-number">429</span> });
      }
    } <span class="hljs-keyword">catch</span> (error) {
      <span class="hljs-keyword">return</span> <span class="hljs-keyword">new</span> Response(<span class="hljs-string">"Could not connect to rate limiter"</span>, { status: <span class="hljs-number">502</span> });
    }

    <span class="hljs-comment">// proceed with normal request handling, e.g. proxy to origin</span>
    <span class="hljs-keyword">return</span> <span class="hljs-keyword">new</span> Response(<span class="hljs-string">"Request successful"</span>, { status: <span class="hljs-number">200</span> });
  },
} satisfies ExportedHandler&lt;Env&gt;;
</code></pre>
<p>The Worker does three things:</p>
<ul>
<li>Extracts the unique identifier, in this case, the API key.</li>
<li>Determines the appropriate rate tier using a simple prefix-based rule (in a real system, this would come from a database).</li>
<li>Sends the unique identifier to the Durable Object for that tier. If the limiter reports that the request should wait, the Worker immediately returns a 429.</li>
</ul>
<p>This structure separates the request logic from the rate-limiting logic, keeping everything clean and modular.</p>
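<p>The prefix rule above can be pulled into a small, easily testable helper. Note that the <code>ult_</code>/<code>pre_</code> prefixes are this article's illustrative convention, not a Cloudflare standard:</p>

```typescript
type Tier = "ultimate" | "premium" | "free";

// Maps an API key to its rate tier; in a production system this
// lookup would come from a database instead of a key prefix.
function tierFor(key: string): Tier {
  if (key.startsWith("ult_")) return "ultimate";
  if (key.startsWith("pre_")) return "premium";
  return "free";
}

tierFor("ult_abc123"); // "ultimate"
tierFor("pre_abc123"); // "premium"
tierFor("abc123");     // "free"
```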
<h3 id="heading-building-the-rate-limiter-durable-objects">Building the Rate Limiter Durable Objects</h3>
<p>The Durable Object itself implements the token-bucket style behaviour. To keep things maintainable, we define a <code>BaseRateLimiter</code> class that encapsulates the shared logic, and then three concrete classes that specify the performance characteristics for each tier.</p>
<pre><code class="lang-ts"><span class="hljs-comment">// src/index.ts</span>

<span class="hljs-comment">// Durable Object</span>
<span class="hljs-keyword">abstract</span> <span class="hljs-keyword">class</span> BaseRateLimiter <span class="hljs-keyword">extends</span> DurableObject {
  <span class="hljs-keyword">abstract</span> milliseconds_per_request: <span class="hljs-built_in">number</span>;
  <span class="hljs-keyword">abstract</span> milliseconds_for_updates: <span class="hljs-built_in">number</span>;
  <span class="hljs-keyword">abstract</span> capacity: <span class="hljs-built_in">number</span>;

  <span class="hljs-keyword">abstract</span> tokens: <span class="hljs-built_in">number</span>;

  <span class="hljs-keyword">async</span> getMillisecondsToNextRequest(): <span class="hljs-built_in">Promise</span>&lt;<span class="hljs-built_in">number</span>&gt; {
    <span class="hljs-keyword">await</span> <span class="hljs-built_in">this</span>.checkAndSetAlarm();

    <span class="hljs-keyword">let</span> delay = <span class="hljs-built_in">this</span>.milliseconds_per_request;
    <span class="hljs-keyword">if</span> (<span class="hljs-built_in">this</span>.tokens &gt; <span class="hljs-number">0</span>) {
      <span class="hljs-built_in">this</span>.tokens -= <span class="hljs-number">1</span>;
      delay = <span class="hljs-number">0</span>;
    }

    <span class="hljs-keyword">return</span> delay;
  }

  <span class="hljs-keyword">private</span> <span class="hljs-keyword">async</span> checkAndSetAlarm() {
    <span class="hljs-keyword">const</span> currentAlarm = <span class="hljs-keyword">await</span> <span class="hljs-built_in">this</span>.ctx.storage.getAlarm();
    <span class="hljs-keyword">if</span> (currentAlarm == <span class="hljs-literal">null</span>) {
      <span class="hljs-keyword">await</span> <span class="hljs-built_in">this</span>.ctx.storage.setAlarm(
        <span class="hljs-built_in">Date</span>.now() + <span class="hljs-built_in">this</span>.milliseconds_for_updates
      );
    }
  }

  <span class="hljs-keyword">async</span> alarm() {
    <span class="hljs-keyword">if</span> (<span class="hljs-built_in">this</span>.tokens &lt; <span class="hljs-built_in">this</span>.capacity) {
      <span class="hljs-built_in">this</span>.tokens = <span class="hljs-built_in">Math</span>.min(
        <span class="hljs-built_in">this</span>.capacity,
        <span class="hljs-built_in">this</span>.tokens + <span class="hljs-built_in">this</span>.milliseconds_for_updates / <span class="hljs-built_in">this</span>.milliseconds_per_request <span class="hljs-comment">// tokens earned since the last refill</span>
      );
      <span class="hljs-keyword">await</span> <span class="hljs-built_in">this</span>.checkAndSetAlarm();
    }
  }
}
</code></pre>
<p>This base class gives each tier its own “bucket” size, refill rate, and request cost.</p>
<p>When <code>getMillisecondsToNextRequest()</code> is called, the object does the following:</p>
<ul>
<li>Ensures an alarm is scheduled to refill tokens.</li>
<li>If tokens are available, it decrements one and returns 0, meaning the request is allowed immediately.</li>
<li>If tokens have run out, it returns a delay value indicating how long the client should wait.</li>
</ul>
<p>The refill logic runs inside the <code>alarm()</code> method, which Cloudflare triggers based on the alarm schedule. This is an efficient way to replenish tokens at the edge without recalculating elapsed time on every request.</p>
<h3 id="heading-defining-each-tier">Defining Each Tier</h3>
<p>Each tier extends the base class, sets its own performance characteristics, and initializes the bucket with the corresponding capacity.</p>
<pre><code class="lang-ts"><span class="hljs-comment">// src/index.ts</span>

<span class="hljs-keyword">export</span> <span class="hljs-keyword">class</span> UltimateRequestLimiter <span class="hljs-keyword">extends</span> BaseRateLimiter {
  milliseconds_per_request = <span class="hljs-number">100</span>; <span class="hljs-comment">// 1 request / 100ms = 10 req/sec</span>
  milliseconds_for_updates = <span class="hljs-number">5000</span>; <span class="hljs-comment">// refill every 5s</span>
  capacity = <span class="hljs-number">80</span>; <span class="hljs-comment">// burst capacity</span>
  tokens: <span class="hljs-built_in">number</span>;

  <span class="hljs-keyword">constructor</span>(<span class="hljs-params">ctx: DurableObjectState, env: Env</span>) {
    <span class="hljs-built_in">super</span>(ctx, env);
    <span class="hljs-built_in">this</span>.tokens = <span class="hljs-built_in">this</span>.capacity;
  }
}

<span class="hljs-keyword">export</span> <span class="hljs-keyword">class</span> PremiumRequestLimiter <span class="hljs-keyword">extends</span> BaseRateLimiter {
  milliseconds_per_request = <span class="hljs-number">200</span>; <span class="hljs-comment">// 1 request / 200ms = 5 req/sec</span>
  milliseconds_for_updates = <span class="hljs-number">5000</span>; <span class="hljs-comment">// refill every 5s</span>
  capacity = <span class="hljs-number">30</span>; <span class="hljs-comment">// burst capacity</span>
  tokens: <span class="hljs-built_in">number</span>;

  <span class="hljs-keyword">constructor</span>(<span class="hljs-params">ctx: DurableObjectState, env: Env</span>) {
    <span class="hljs-built_in">super</span>(ctx, env);
    <span class="hljs-built_in">this</span>.tokens = <span class="hljs-built_in">this</span>.capacity;
  }
}

<span class="hljs-keyword">export</span> <span class="hljs-keyword">class</span> FreeRequestLimiter <span class="hljs-keyword">extends</span> BaseRateLimiter {
  milliseconds_per_request = <span class="hljs-number">500</span>; <span class="hljs-comment">// 1 request / 500ms = 2 req/sec</span>
  milliseconds_for_updates = <span class="hljs-number">5000</span>; <span class="hljs-comment">// refill every 5s</span>
  capacity = <span class="hljs-number">10</span>; <span class="hljs-comment">// burst capacity</span>
  tokens: <span class="hljs-built_in">number</span>;

  <span class="hljs-keyword">constructor</span>(<span class="hljs-params">ctx: DurableObjectState, env: Env</span>) {
    <span class="hljs-built_in">super</span>(ctx, env);
    <span class="hljs-built_in">this</span>.tokens = <span class="hljs-built_in">this</span>.capacity;
  }
}
</code></pre>
<p>Each tier defines three important parameters:</p>
<ul>
<li><code>milliseconds_per_request</code>: how frequently a client is allowed to make a request</li>
<li><code>milliseconds_for_updates</code>: how often to refill tokens</li>
<li><code>capacity</code>: how many requests can be made in a burst before throttling begins</li>
</ul>
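<p>These parameters imply both a burst size (<code>capacity</code>) and a sustained rate. Assuming each refill cycle tops the bucket up by <code>milliseconds_for_updates / milliseconds_per_request</code> tokens, a quick arithmetic check of the tiers looks like:</p>

```typescript
// Tokens added per refill cycle, and the sustained request rate it implies.
function tokensPerRefill(msPerRequest: number, msForUpdates: number): number {
  return msForUpdates / msPerRequest;
}

function sustainedReqPerSec(msPerRequest: number): number {
  return 1000 / msPerRequest;
}

tokensPerRefill(100, 5000); // Ultimate: 50 tokens every 5s
sustainedReqPerSec(100);    // 10 req/sec
tokensPerRefill(500, 5000); // Free: 10 tokens every 5s
sustainedReqPerSec(500);    // 2 req/sec
```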
<p>Because everything is based on an abstract parent class, defining new tiers becomes straightforward.</p>
<h3 id="heading-binding-the-durable-objects-in-wranglerjsonc">Binding the Durable Objects in wrangler.jsonc</h3>
<p>Finally, create a binding for each of the three tier limiters in the configuration:</p>
<pre><code class="lang-json"><span class="hljs-comment">// wrangler.jsonc</span>

<span class="hljs-string">"durable_objects"</span>: {
  <span class="hljs-attr">"bindings"</span>: [
    {
      <span class="hljs-attr">"name"</span>: <span class="hljs-string">"FREE_LIMITER"</span>,
      <span class="hljs-attr">"class_name"</span>: <span class="hljs-string">"FreeRequestLimiter"</span>
    },
    {
      <span class="hljs-attr">"name"</span>: <span class="hljs-string">"PREMIUM_LIMITER"</span>,
      <span class="hljs-attr">"class_name"</span>: <span class="hljs-string">"PremiumRequestLimiter"</span>
    },
    {
      <span class="hljs-attr">"name"</span>: <span class="hljs-string">"ULTIMATE_LIMITER"</span>,
      <span class="hljs-attr">"class_name"</span>: <span class="hljs-string">"UltimateRequestLimiter"</span>
    }
  ]
}
</code></pre>
<p>Each binding points to its corresponding class, allowing the Worker entrypoint to retrieve the right limiter when processing requests. Then run <code>wrangler types</code> to generate the type definitions for the durable objects.</p>
<hr />
<h2 id="heading-deployment-local-and-cloudflare">Deployment: Local and Cloudflare</h2>
<p>Once your Worker and Durable Objects are ready, the final step is to run them locally and then publish them to Cloudflare. The process is simple, and it’s the same whether you’re building a rate limiter or a larger edge application.</p>
<h3 id="heading-running-locally">Running Locally</h3>
<p>Cloudflare’s local environment is started with:</p>
<pre><code class="lang-bash">wrangler dev
</code></pre>
<p>This runs your Worker on <strong>localhost:8787</strong> and also spins up local Durable Objects automatically. Nothing extra needs to be configured. Every API key you use will create or reuse its corresponding Durable Object instance exactly as it would when deployed to Cloudflare (production).</p>
<p>You can test requests immediately:</p>
<pre><code class="lang-bash">curl -H <span class="hljs-string">"API_KEY: pre_test123"</span> http://localhost:8787
</code></pre>
<h3 id="heading-deploying-to-cloudflare">Deploying to Cloudflare</h3>
<p>When everything works locally, deploy globally:</p>
<pre><code class="lang-bash">wrangler deploy
</code></pre>
<p>Wrangler builds your Worker, uploads it to Cloudflare’s edge network, creates Durable Object classes if needed, and returns a production URL.</p>
<p>You can test the deployed version the same way:</p>
<pre><code class="lang-bash">curl -H <span class="hljs-string">"API_KEY: pre_test123"</span> https://your-worker.your-subdomain.workers.dev
</code></pre>
<p>When deployed to Cloudflare, you can confirm that your rate-limiter durable objects are active by going to:</p>
<p><strong>Cloudflare Dashboard → Workers → Durable Objects</strong></p>
<p>This page shows all instances created by your Worker, along with their current storage and activity.</p>
<hr />
<h2 id="heading-fixed-window-implementation"><strong>Fixed-Window Implementation</strong></h2>
<p>Here's a code snippet of a fixed-window implementation, for comparison:</p>
<pre><code class="lang-ts"><span class="hljs-keyword">const</span> windowSize = <span class="hljs-number">60</span> * <span class="hljs-number">1000</span>; <span class="hljs-comment">// 1 minute</span>
<span class="hljs-keyword">const</span> limit = <span class="hljs-number">10</span>; <span class="hljs-comment">// max requests per window</span>
<span class="hljs-keyword">let</span> counter = <span class="hljs-number">0</span>;
<span class="hljs-keyword">let</span> windowStart = <span class="hljs-built_in">Date</span>.now();

<span class="hljs-keyword">if</span> (<span class="hljs-built_in">Date</span>.now() - windowStart &gt; windowSize) {
  counter = <span class="hljs-number">0</span>;
  windowStart = <span class="hljs-built_in">Date</span>.now();
}

<span class="hljs-keyword">if</span> (counter &gt;= limit) {
  <span class="hljs-keyword">return</span> <span class="hljs-keyword">new</span> Response(<span class="hljs-string">"Too Many Requests"</span>, { status: <span class="hljs-number">429</span> });
}

counter++;
</code></pre>
<ul>
<li>Simple, but can cause bursts at window boundaries.</li>
</ul>
<hr />
<h2 id="heading-testing-and-benchmarking-at-the-edge"><strong>Testing and Benchmarking at the Edge</strong></h2>
<p>Once the Worker and Durable Object are deployed, you can test with:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># Single request</span>
curl -i https://your-worker.your-subdomain.workers.dev/api/<span class="hljs-built_in">test</span>

<span class="hljs-comment"># Burst test</span>
<span class="hljs-keyword">for</span> i <span class="hljs-keyword">in</span> {1..30}; <span class="hljs-keyword">do</span> curl -s -o /dev/null -w <span class="hljs-string">"%{http_code}\n"</span> https://your-worker.your-subdomain.workers.dev/api/<span class="hljs-built_in">test</span>; <span class="hljs-keyword">done</span>
</code></pre>
<p>Metrics to monitor include:</p>
<ul>
<li>Number of requests blocked (<code>429</code>)</li>
<li>Average latency of Durable Object calls</li>
<li>Token refill consistency</li>
<li>Error rate</li>
</ul>
<p>Edge deployments generally have sub-millisecond response times for token checks. Using <code>wrangler dev</code> locally is great for functional tests, but benchmarking in production ensures you capture global latency patterns.</p>
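<p>To turn a burst test into numbers you can compare across runs, it helps to aggregate the raw status codes and latencies. A small helper (the shape of <code>Sample</code> is this article's own convention, not a Cloudflare API) might look like:</p>

```typescript
interface Sample {
  status: number; // HTTP status code of one probe request
  ms: number;     // observed round-trip latency in milliseconds
}

// Summarize a burst test: how many requests were throttled,
// and what the average round-trip latency was.
function summarize(samples: Sample[]) {
  const blocked = samples.filter((s) => s.status === 429).length;
  const avgMs = samples.reduce((sum, s) => sum + s.ms, 0) / samples.length;
  return { total: samples.length, blocked, allowed: samples.length - blocked, avgMs };
}

summarize([
  { status: 200, ms: 12 },
  { status: 200, ms: 18 },
  { status: 429, ms: 9 },
]); // { total: 3, blocked: 1, allowed: 2, avgMs: 13 }
```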
<hr />
<h2 id="heading-observations-and-best-practices">Observations and Best Practices</h2>
<ul>
<li><p><strong>Keep state lightweight:</strong> Store only what’s necessary, like token counts and last refill timestamps. This ensures fast execution and minimal storage overhead.</p>
</li>
<li><p><strong>Avoid heavy computations in Durable Objects:</strong> CPU time is limited per execution. Complex logic can slow down requests and increase costs.</p>
</li>
<li><p><strong>Plan object keys carefully:</strong> Use one unique key per client to prevent race conditions and ensure accurate rate tracking.</p>
</li>
<li><p><strong>Provide client feedback:</strong> Return headers such as <code>X-RateLimit-Remaining</code> and <code>Retry-After</code> so clients know when to retry.</p>
</li>
<li><p><strong>Test for edge cases:</strong> Simulate bursts, multi-client traffic, and slow refill scenarios to ensure limits behave as expected.</p>
</li>
</ul>
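<p>For the client-feedback point above, a throttled response can carry standard headers so well-behaved clients know exactly when to retry. Here is a sketch using the Web <code>Response</code> API (the one-second <code>Retry-After</code> is just an example value):</p>

```typescript
// Build a 429 response that tells the client how long to wait
// and how many requests remain in its budget.
function rateLimitedResponse(retryAfterSeconds: number, remaining: number): Response {
  return new Response("Rate limit exceeded", {
    status: 429,
    headers: {
      "Retry-After": String(retryAfterSeconds),
      "X-RateLimit-Remaining": String(remaining),
    },
  });
}

const res = rateLimitedResponse(1, 0);
// res.status === 429, res.headers.get("Retry-After") === "1"
```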
<p>By following these practices, your rate limiter stays predictable, efficient, and ready to handle traffic at the edge.</p>
<hr />
<h2 id="heading-closing-thoughts">Closing Thoughts</h2>
<p>In this part, we moved from theory to practice by implementing an HTTP rate limiter on Cloudflare Workers using Durable Objects. We explored how to track client requests, enforce limits per tier, and handle bursts efficiently at the edge.</p>
<p>You also learned the difference between fixed-window and token-bucket strategies, why token-bucket is often better for edge deployments, and how to benchmark and tune your limiter for real-world traffic.</p>
<p>In Part 3, we’ll take these concepts further by extending rate limiting to WebSockets, building a modular middleware pipeline, and covering best practices for deploying, monitoring, and scaling edge traffic. By the end, you’ll have a foundation for production-ready, globally distributed request control.</p>
<p>Next: <strong>[Part 3: Building a WebSocket Rate Limiter with Cloudflare Workers]</strong></p>
]]></content:encoded></item><item><title><![CDATA[Part 1: Understanding Cloudflare Workers & Edge Middleware]]></title><description><![CDATA[Introduction — The Rise of Edge Computing
In the last decade, we’ve witnessed a massive shift from monoliths → microservices → serverless. Each step moved compute closer to flexibility, but not necessarily closer to users.
Edge computing changes that...]]></description><link>https://sorxcode.com/part-1-understanding-cloudflare-workers-and-edge-middleware</link><guid isPermaLink="true">https://sorxcode.com/part-1-understanding-cloudflare-workers-and-edge-middleware</guid><category><![CDATA[cloudflare-worker]]></category><category><![CDATA[Middleware]]></category><category><![CDATA[TypeScript]]></category><category><![CDATA[edge computing]]></category><category><![CDATA[proxy]]></category><dc:creator><![CDATA[Victor Adeyanju]]></dc:creator><pubDate>Fri, 14 Nov 2025 23:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1763475731817/22f5de62-f12d-474a-9ad8-b47b20fbcd11.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-introduction-the-rise-of-edge-computing">Introduction — The Rise of Edge Computing</h1>
<p>In the last decade, we’ve witnessed a massive shift from monoliths → microservices → serverless. Each step moved compute closer to flexibility, but not necessarily closer to users.</p>
<p>Edge computing changes that. It allows you to run code within the CDN network itself, milliseconds from the user. Cloudflare Workers make this possible by letting developers run JavaScript and TypeScript code in more than 300 data centers globally.</p>
<p>Running logic at the edge improves latency, enhances security, reduces costs, and scales automatically without traditional server provisioning. In this series, we will build production-ready middleware for HTTP and WebSocket traffic. The focus will be on rate limiting, but the patterns we discuss will apply to caching, authentication, and logging.</p>
<h2 id="heading-what-are-cloudflare-workers">What Are Cloudflare Workers?</h2>
<p>Cloudflare Workers are small JavaScript or TypeScript programs that run inside V8 isolates, the same sandbox technology used by Chrome and Node.js, but stripped down and optimized for edge execution.</p>
<p>When a request hits your domain, it passes through Cloudflare’s network. Your Worker can then inspect or modify the request, authenticate or rate-limit the client, cache responses, transform data, or proxy WebSocket connections before reaching your origin servers.</p>
<p>Workers start in less than one millisecond and can handle millions of requests concurrently. They are ideal for ultra-fast, high-scale logic that runs close to the user.</p>
<h3 id="heading-how-workers-differ-from-lambda-and-friends">How Workers Differ from Lambda and Friends</h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Feature</td><td>Cloudflare Worker</td><td>AWS Lambda</td><td>Vercel Functions</td></tr>
</thead>
<tbody>
<tr>
<td>Startup Time</td><td>~1 ms (no cold start)</td><td>100-500 ms</td><td>100-300 ms</td></tr>
<tr>
<td>Runtime</td><td>V8 Isolate (Web APIs)</td><td>Node.js Container</td><td>Node.js</td></tr>
<tr>
<td>Max CPU Time</td><td>50 ms (default plan)</td><td>up to 15 min</td><td>10 s</td></tr>
<tr>
<td>Deploy Scope</td><td>Global edge</td><td>Single region</td><td>Region + edge cache</td></tr>
<tr>
<td>State Model</td><td>Durable Objects &amp; KV</td><td>DB / External</td><td>DB / External</td></tr>
</tbody>
</table>
</div><p><strong>Workers are designed for short-lived, high-performance workloads. They are not suitable for long-running tasks or heavy compute operations.</strong></p>
<h3 id="heading-the-v8-isolate-runtime-browser-dna-at-the-edge">The V8 Isolate Runtime ⚙️ - Browser DNA at the Edge</h3>
<p>Because Cloudflare Workers run inside a V8 isolate, the environment is more like a browser than Node.js.</p>
<p><strong>What Works</strong>:</p>
<ul>
<li><p>Standard Web APIs such as <code>fetch</code>, <code>Request</code>, <code>Response</code>, <code>URL</code>, <code>Headers</code>, <code>FormData</code>, <code>ReadableStream</code>, and <code>Crypto</code>.</p>
</li>
<li><p>Modern ECMAScript syntax including <code>async/await</code> and modules.</p>
</li>
<li><p>Cloudflare bindings such as <code>KV</code>, <code>R2</code>, <code>Durable Objects</code>, <code>D1</code>, <code>Queues</code>, and <code>AI</code> integrations.</p>
</li>
</ul>
<p><strong>What Does Not Work:</strong></p>
<p>Node.js-specific APIs such as <code>fs</code>, <code>net</code>, <code>tls</code>, <code>dgram</code>, <code>path</code>, <code>os</code>, <code>cluster</code>, and <code>child_process</code> are not available. Packages that depend on these will fail.</p>
<p><strong>This means not all npm packages will work. Any library that depends on Node internals will break.</strong></p>
<h3 id="heading-alternatives-and-patterns">Alternatives and Patterns</h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Node Package Type</td><td>Edge-Safe Alternative</td></tr>
</thead>
<tbody>
<tr>
<td>HTTP client (<code>axios</code>)</td><td>Native <code>fetch</code> API</td></tr>
<tr>
<td>Router (<code>express</code>)</td><td><code>hono</code> or <code>itty-router</code></td></tr>
<tr>
<td>Cache / Storage</td><td>Cloudflare KV / R2</td></tr>
<tr>
<td>Crypto helpers</td><td>WebCrypto API</td></tr>
<tr>
<td>Env config</td><td>Worker <code>Env</code> bindings / <code>wrangler.toml</code></td></tr>
</tbody>
</table>
</div><p><strong>A quick check:</strong> if the code runs in Chrome’s DevTools console, it probably runs in a Cloudflare Worker.</p>
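<p>As a concrete instance of that rule of thumb, the following uses only Web platform globals (<code>URL</code>, <code>Headers</code>), so it runs unchanged in a browser console, in Node 18+, and in a Worker:</p>

```typescript
// No Node built-ins — only Web platform globals.
function describeRequest(rawUrl: string, apiKey: string): string {
  const url = new URL(rawUrl);
  const headers = new Headers({ API_KEY: apiKey }); // header lookup is case-insensitive
  return `${url.pathname} (key: ${headers.get("api_key")})`;
}

describeRequest("https://example.com/api/items?page=2", "pre_demo");
// "/api/items (key: pre_demo)"
```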
<h2 id="heading-why-this-matters-for-middleware-design">Why This Matters for Middleware Design</h2>
<p>Building middleware on Workers means embracing Web APIs and message-based architecture, not Node servers.</p>
<p>Instead of spinning up a Redis client, you might use a Durable Object that acts as a distributed counter. Instead of file I/O, you rely on Cloudflare KV for persistence. Instead of Express middleware, you define lightweight fetch handlers.</p>
<p>These constraints lead to simpler, safer, and globally scalable patterns, but they also force you to design differently.</p>
<h2 id="heading-the-idea-of-edge-middleware">The Idea of Edge Middleware</h2>
<p>Think of middleware as the logic between a user’s request and your origin response.</p>
<pre><code class="lang-markdown"><span class="hljs-code">         ┌────────────────────┐
Client → │   Edge Middleware  │ → Backend API
         └────────────────────┘
                  ▲
            Logging, Auth,
            Rate Limiting</span>
</code></pre>
<p>At the edge, middleware can decide within milliseconds whether a request deserves to reach your backend or not.</p>
<p>Some of the benefits of this approach are:</p>
<ul>
<li><p>Better Security: Bad actors can be dropped early, even before reaching your back-end service.</p>
</li>
<li><p>Improved Performance: Reduce round-trips to origin.</p>
</li>
<li><p>Cost: Save bandwidth and compute.</p>
</li>
<li><p>Observability: Capture metrics closer to users, which can be further exported to Loki/Grafana/Mimir.</p>
</li>
</ul>
<h2 id="heading-when-to-use-edge-middleware">When to Use Edge Middleware</h2>
<p>Below are some real, production-ready workloads handled daily by Cloudflare Workers.</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Scenario</td><td>Edge Solution</td><td>Benefit</td></tr>
</thead>
<tbody>
<tr>
<td>API rate limiting</td><td>Worker + Durable Object</td><td>Global throttling</td></tr>
<tr>
<td>Auth token check</td><td>Worker middleware</td><td>Drop unauthorized before origin</td></tr>
<tr>
<td>Request logging</td><td>Worker → Logs API</td><td>Low-latency telemetry</td></tr>
<tr>
<td>Feature flags</td><td>Worker KV lookup</td><td>Rollouts per region</td></tr>
<tr>
<td>WebSocket gateway</td><td>Worker + Durable Object hub</td><td>Real-time control at edge</td></tr>
</tbody>
</table>
</div><h2 id="heading-why-edge-rate-limiting-beats-traditional-approaches">Why Edge Rate Limiting Beats Traditional Approaches</h2>
<p>Traditional rate limiting relies on in-memory counters or Redis. This introduces latency and single points of failure.</p>
<p>Edge rate limiting solves this by:</p>
<ul>
<li><p>Storing counters in Durable Objects, automatically sharded by user key.</p>
</li>
<li><p>Evaluating requests close to the user.</p>
</li>
<li><p>Providing consistent global enforcement per client.</p>
</li>
</ul>
<p>This results in sub-millisecond enforcement with minimal backend dependency.</p>
<h2 id="heading-durable-objects-do">Durable Objects (DO)</h2>
<p>Durable Objects are stateful actors that maintain consistent state across requests.</p>
<p>A Durable Object can:</p>
<ul>
<li><p>Store data in memory.</p>
</li>
<li><p>Persist data between requests.</p>
</li>
<li><p>Receive messages via fetch().</p>
</li>
<li><p>Guarantee one instance per ID worldwide.</p>
</li>
</ul>
<p>Each API key or client can have a dedicated Durable Object instance. This is ideal for token buckets, request counters, or WebSocket hubs.</p>
<pre><code class="lang-md"><span class="hljs-code">      +---------------------------+
      | Durable Object: user_123  |
      | tokens = 5 of 10          |
      | lastRefill = 1699999999   |
      +---------------------------+</span>
</code></pre>
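<p>The refill logic behind that diagram fits in a few lines. Below is a minimal, framework-free sketch of a token bucket. The names <code>TokenBucket</code> and <code>tryRemove</code> are illustrative, not part of the Cloudflare SDK; inside a real Worker this state would live in a Durable Object so it survives across requests:</p>
<pre><code class="lang-ts">// Minimal token bucket: "capacity" tokens, refilled at "ratePerSec".
// Illustrative sketch only; not a Cloudflare API.
class TokenBucket {
  tokens: number;
  lastRefill: number;

  constructor(readonly capacity: number, readonly ratePerSec: number) {
    this.tokens = capacity;
    this.lastRefill = Date.now();
  }

  tryRemove(now: number): boolean {
    // Refill proportionally to the time elapsed since the last check.
    const elapsedSec = Math.max(0, (now - this.lastRefill) / 1000);
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.ratePerSec);
    this.lastRefill = now;
    // Allowed when at least one whole token remains.
    const allowed = Math.min(this.tokens, 1) === 1;
    if (allowed) {
      this.tokens = this.tokens - 1;
    }
    return allowed;
  }
}
</code></pre>
<p>A bucket with <code>capacity = 10</code> and <code>ratePerSec = 1</code> allows short bursts of up to ten requests while enforcing a steady one-request-per-second average.</p>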
<hr />
<h2 id="heading-architecture-vision-for-this-series">Architecture Vision for This Series</h2>
<pre><code class="lang-md"><span class="hljs-code">                ┌────────────────────────────┐
                │        Cloudflare Edge     │
                │ ┌────────────────────────┐ │
Client → → → →  │ │ Worker (Middleware)    │ │ → Backend
                │ │ - Auth &amp; Rate Limit    │ │
                │ │ - WS Proxy Gateway     │ │
                │ └────────────────────────┘ │
                │ ┌────────────────────────┐ │
                │ │ Durable Object(s)      │ │
                │ │ - Per-user counters    │ │
                │ │ - Token buckets        │ │
                │ └────────────────────────┘ │
                └────────────────────────────┘</span>
</code></pre>
<p>The beauty of this design is that it scales automatically — no central Redis, no replication nightmares, no cold starts.</p>
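<p>To reach one of those Durable Objects from the middleware Worker, you derive a stable ID from the client key and talk to the resulting stub. In the sketch below, <code>RATE_LIMITER</code> is an assumed binding name (it would be declared in <code>wrangler.jsonc</code>); <code>idFromName</code> and <code>get</code> are the standard Durable Object namespace methods:</p>
<pre><code class="lang-ts">// Route a request to the per-client Durable Object instance.
// "RATE_LIMITER" is an assumed binding name, configured in wrangler.jsonc.
function limiterStubFor(env: any, clientKey: string) {
  // The same name always maps to the same globally unique object ID,
  // which is what gives us one consistent counter per client.
  const id = env.RATE_LIMITER.idFromName(clientKey);
  return env.RATE_LIMITER.get(id);
}
</code></pre>
<p>The Worker then calls <code>fetch()</code> on the returned stub to ask the object whether the request is allowed.</p>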
<h2 id="heading-setting-up-the-environment">Setting Up the Environment</h2>
<ol>
<li>Install Wrangler</li>
</ol>
<pre><code class="lang-bash">npm install -g wrangler
</code></pre>
<p>Confirm:</p>
<pre><code class="lang-bash">npx wrangler --version
</code></pre>
<p><em>Note:</em> At the time of writing, Wrangler requires at least Node.js v20.0.0. Consider using a Node.js version manager such as <a target="_blank" href="https://github.com/nvm-sh/nvm">nvm</a> to switch node versions.</p>
<ol start="2">
<li>Login to Cloudflare</li>
</ol>
<pre><code class="lang-bash">npx wrangler login
</code></pre>
<ol start="3">
<li>Initialize a Worker Project</li>
</ol>
<pre><code class="lang-bash">npx wrangler init edge-middleware
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763476646424/9cfe4595-a640-4d4c-b272-7c2db2dc3008.png" alt class="image--center mx-auto" /></p>
<p>Responses to all prompts are shown in the screenshot above. Your file/folder structure should be similar to this:</p>
<pre><code class="lang-bash">.
├── node_modules\
├── package-lock.json
├── package.json
├── src
│   └── index.ts
├── <span class="hljs-built_in">test</span>
│   ├── env.d.ts
│   ├── index.spec.ts
│   └── tsconfig.json
├── tsconfig.json
├── vitest.config.mts
├── worker-configuration.d.ts
└── wrangler.jsonc
</code></pre>
<p>We’ll define the RateLimiter Durable Object class in Part 2.</p>
<hr />
<h2 id="heading-your-first-hello-world-worker">Your First Hello-World Worker</h2>
<p>Try a simple fetch handler:</p>
<pre><code class="lang-ts"><span class="hljs-comment">//src/index.ts</span>
<span class="hljs-keyword">export</span> <span class="hljs-keyword">default</span> {
  <span class="hljs-keyword">async</span> fetch(request: Request, env: Env, ctx: ExecutionContext): <span class="hljs-built_in">Promise</span>&lt;Response&gt; {
    <span class="hljs-keyword">return</span> <span class="hljs-keyword">new</span> Response(<span class="hljs-string">"Hello from Cloudflare worker"</span>, {
      headers: { <span class="hljs-string">"content-type"</span>: <span class="hljs-string">"text/plain"</span> },
    });
  },
} satisfies ExportedHandler&lt;Env&gt;;
</code></pre>
<p>Run locally:</p>
<pre><code class="lang-bash">npx wrangler dev
</code></pre>
<p>Visit <a target="_blank" href="http://localhost:8787">http://localhost:8787</a> → you’ll see the response instantly.</p>
<hr />
<h2 id="heading-practice-simple-edge-middleware">Practice: Simple Edge Middleware</h2>
<p>Let’s add basic validation before proxying to a backend:</p>
<pre><code class="lang-ts"><span class="hljs-comment">//src/index.ts</span>
<span class="hljs-keyword">export</span> <span class="hljs-keyword">default</span> {
  <span class="hljs-keyword">async</span> fetch(req: Request) {
    <span class="hljs-keyword">const</span> url = <span class="hljs-keyword">new</span> URL(req.url);

    <span class="hljs-keyword">if</span> (!url.pathname.startsWith(<span class="hljs-string">"/api"</span>)) {
      <span class="hljs-keyword">return</span> <span class="hljs-keyword">new</span> Response(<span class="hljs-string">"Forbidden"</span>, { status: <span class="hljs-number">403</span> });
    }

    <span class="hljs-keyword">const</span> backend = <span class="hljs-keyword">await</span> fetch(<span class="hljs-string">"https://api.yourbackend.com"</span> + url.pathname, {
      headers: req.headers,
      method: req.method,
      body: req.body,
    });

    <span class="hljs-keyword">const</span> res = <span class="hljs-keyword">new</span> Response(backend.body, backend);
    res.headers.set(<span class="hljs-string">"x-edge-checked"</span>, <span class="hljs-string">"true"</span>);
    <span class="hljs-keyword">return</span> res;
  },
};
</code></pre>
<p>You’ve now written an edge gateway that can inspect, block, and modify traffic before it reaches your origin. For local testing, I’ll run a basic HTTP server and use it as my backend endpoint:</p>
<pre><code class="lang-bash">python3 -m http.server 8001
</code></pre>
<p>If you’re testing locally, you can replace <code>https://api.yourbackend.com</code> in <code>src/index.ts</code> with <code>http://localhost:8001</code>.</p>
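<p>Rather than editing the code each time you switch targets, the backend origin can be read from an environment variable. Here <code>BACKEND_ORIGIN</code> is an assumed variable name you would declare under <code>vars</code> in <code>wrangler.jsonc</code>; this is a sketch, not the article’s final setup:</p>
<pre><code class="lang-ts">// Resolve the backend origin from the Worker environment,
// falling back to the local test server from above.
// "BACKEND_ORIGIN" is an assumed vars entry, not a built-in.
function backendUrl(env: any, pathname: string): string {
  const origin = env.BACKEND_ORIGIN || "http://localhost:8001";
  return origin + pathname;
}
</code></pre>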
<h3 id="heading-debugging-and-local-testing">Debugging and Local Testing</h3>
<p>Wrangler provides a powerful local runtime that simulates the Cloudflare environment, including Durable Objects and bindings. Start it by running:</p>
<pre><code class="lang-bash">npx wrangler dev
</code></pre>
<h4 id="heading-simulate-requests">Simulate requests:</h4>
<pre><code class="lang-bash"><span class="hljs-keyword">for</span> i <span class="hljs-keyword">in</span> {1..10}; <span class="hljs-keyword">do</span> curl -s -o /dev/null -w <span class="hljs-string">"%{http_code}\n"</span> http://localhost:8787/api/<span class="hljs-built_in">test</span>; <span class="hljs-keyword">done</span>
</code></pre>
<h3 id="heading-deployment">Deployment</h3>
<p>Deploying your Worker to Cloudflare makes it globally accessible. Deploy it by running:</p>
<pre><code class="lang-bash">npx wrangler deploy
</code></pre>
<p>Your Worker is instantly available, and its URL is displayed in the terminal, something like <code>https://&lt;your-project-id&gt;.workers.dev</code>. You can also add a custom domain to the Worker via the Cloudflare dashboard.</p>
<h2 id="heading-observability-and-metrics">Observability and Metrics</h2>
<p>Cloudflare gives built-in analytics, but you can push custom metrics too:</p>
<ul>
<li><p><strong>Workers Logs API</strong>: stream structured logs.</p>
</li>
<li><p><strong>Durable Object storage</strong>: aggregate usage per key.</p>
</li>
<li><p><strong>Integrations</strong>: send logs to Grafana/Loki/Mimir (LGTM stack).</p>
</li>
<li><p><strong>Wrangler tail</strong>: live-view logs during dev.</p>
</li>
</ul>
<p>Since we’re interested in the live-view logs of the deployed Worker, we can run:</p>
<pre><code class="lang-bash">npx wrangler tail
</code></pre>
<h2 id="heading-closing-thoughts">Closing Thoughts</h2>
<p>Cloudflare Workers change how we think about deploying logic on the internet. Instead of running code on a single server or region, the platform pushes your logic to the edge, closer to every user. This makes Workers feel fast in practice, even when doing simple tasks like modifying headers, validating requests, or shaping traffic before it reaches your backend.</p>
<p>Because Workers run inside a V8 isolate, they follow a “browser-style” environment. This means you won’t have access to traditional Node.js modules, but you do get lightweight execution, instant cold starts, and a highly secure sandbox. Once you work within those boundaries, the platform becomes surprisingly flexible and powerful.</p>
<p>Part 1 focused on building an intuition for Workers and the concept of middleware at the edge. In Part 2, we move from ideas to implementation. We’ll build a practical HTTP rate limiter using Durable Objects, compare popular algorithms, and walk through how they behave in real-world workloads.</p>
<p><a target="_blank" href="https://sorxcode.com/part-2-building-an-http-rate-limiter-with-cloudflare-workers">Next Up: <strong>[Part 2: Building an HTTP Rate Limiter with Durable Objects]</strong></a></p>
]]></content:encoded></item><item><title><![CDATA[CTF: Ethernaut-0]]></title><description><![CDATA[Level 0: Hello Ethernaut
This level contains instructions on how to set up and connect via MetaMask on the Rinkeby network. Some helpful commands are also introduced:

player: displays the player's address.
getBalance(addr): displays ether balance in ...]]></description><link>https://sorxcode.com/ctf-ethernaut-0</link><guid isPermaLink="true">https://sorxcode.com/ctf-ethernaut-0</guid><category><![CDATA[Ethernaut]]></category><category><![CDATA[foundry]]></category><category><![CDATA[level-0]]></category><dc:creator><![CDATA[Victor Adeyanju]]></dc:creator><pubDate>Sun, 10 Jul 2022 19:51:06 GMT</pubDate><content:encoded><![CDATA[<h2 id="heading-level-0-hello-ethernaut">Level 0: Hello Ethernaut</h2>
<p>This level contains instructions on how to set up and connect via MetaMask on the Rinkeby network. Some helpful commands are also introduced:</p>
<ul>
<li><code>player</code>: displays the player's address.</li>
<li><code>getBalance(addr)</code>: displays ether balance in addr.</li>
<li><code>help</code>: displays in-game help menu</li>
<li><code>ethernaut</code>: game's main smart-contract</li>
<li><code>ethernaut.owner()</code>: queries and returns owner of ethernaut game.</li>
<li><code>contract</code>: level's contract ABI</li>
<li><code>contract.info()</code>: get level's info</li>
</ul>
<p>To complete this level:</p>
<ul>
<li>Click the "Get new instance" button to get a new instance</li>
<li>Run <code>await contract.info()</code> and follow the prompts</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1657461112517/aCJdd5RfC.png" alt="cmds.png" /></p>
<ul>
<li>click on the "Submit instance" button after signing the <code>.authenticate</code> transaction</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1657461125458/9CGuM96xW.png" alt="completion.png" /> </p>
<ul>
<li>On completion, it shows the level's contract source code</li>
</ul>
<pre><code class="lang-solidity"><span class="hljs-comment">// SPDX-License-Identifier: MIT</span>
<span class="hljs-meta"><span class="hljs-keyword">pragma</span> <span class="hljs-keyword">solidity</span> ^0.6.0;</span>

<span class="hljs-class"><span class="hljs-keyword">contract</span> <span class="hljs-title">Instance</span> </span>{

  <span class="hljs-keyword">string</span> <span class="hljs-keyword">public</span> password;
  <span class="hljs-keyword">uint8</span> <span class="hljs-keyword">public</span> infoNum <span class="hljs-operator">=</span> <span class="hljs-number">42</span>;
  <span class="hljs-keyword">string</span> <span class="hljs-keyword">public</span> theMethodName <span class="hljs-operator">=</span> <span class="hljs-string">'The method name is method7123949.'</span>;
  <span class="hljs-keyword">bool</span> <span class="hljs-keyword">private</span> cleared <span class="hljs-operator">=</span> <span class="hljs-literal">false</span>;

  <span class="hljs-comment">// constructor</span>
  <span class="hljs-function"><span class="hljs-keyword">constructor</span>(<span class="hljs-params"><span class="hljs-keyword">string</span> <span class="hljs-keyword">memory</span> _password</span>) <span class="hljs-title"><span class="hljs-keyword">public</span></span> </span>{
    password <span class="hljs-operator">=</span> _password;
  }

  <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">info</span>(<span class="hljs-params"></span>) <span class="hljs-title"><span class="hljs-keyword">public</span></span> <span class="hljs-title"><span class="hljs-keyword">pure</span></span> <span class="hljs-title"><span class="hljs-keyword">returns</span></span> (<span class="hljs-params"><span class="hljs-keyword">string</span> <span class="hljs-keyword">memory</span></span>) </span>{
    <span class="hljs-keyword">return</span> <span class="hljs-string">'You will find what you need in info1().'</span>;
  }

  <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">info1</span>(<span class="hljs-params"></span>) <span class="hljs-title"><span class="hljs-keyword">public</span></span> <span class="hljs-title"><span class="hljs-keyword">pure</span></span> <span class="hljs-title"><span class="hljs-keyword">returns</span></span> (<span class="hljs-params"><span class="hljs-keyword">string</span> <span class="hljs-keyword">memory</span></span>) </span>{
    <span class="hljs-keyword">return</span> <span class="hljs-string">'Try info2(), but with "hello" as a parameter.'</span>;
  }

  <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">info2</span>(<span class="hljs-params"><span class="hljs-keyword">string</span> <span class="hljs-keyword">memory</span> param</span>) <span class="hljs-title"><span class="hljs-keyword">public</span></span> <span class="hljs-title"><span class="hljs-keyword">pure</span></span> <span class="hljs-title"><span class="hljs-keyword">returns</span></span> (<span class="hljs-params"><span class="hljs-keyword">string</span> <span class="hljs-keyword">memory</span></span>) </span>{
    <span class="hljs-keyword">if</span>(<span class="hljs-built_in">keccak256</span>(<span class="hljs-built_in">abi</span>.<span class="hljs-built_in">encodePacked</span>(param)) <span class="hljs-operator">=</span><span class="hljs-operator">=</span> <span class="hljs-built_in">keccak256</span>(<span class="hljs-built_in">abi</span>.<span class="hljs-built_in">encodePacked</span>(<span class="hljs-string">'hello'</span>))) {
      <span class="hljs-keyword">return</span> <span class="hljs-string">'The property infoNum holds the number of the next info method to call.'</span>;
    }
    <span class="hljs-keyword">return</span> <span class="hljs-string">'Wrong parameter.'</span>;
  }

  <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">info42</span>(<span class="hljs-params"></span>) <span class="hljs-title"><span class="hljs-keyword">public</span></span> <span class="hljs-title"><span class="hljs-keyword">pure</span></span> <span class="hljs-title"><span class="hljs-keyword">returns</span></span> (<span class="hljs-params"><span class="hljs-keyword">string</span> <span class="hljs-keyword">memory</span></span>) </span>{
    <span class="hljs-keyword">return</span> <span class="hljs-string">'theMethodName is the name of the next method.'</span>;
  }

  <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">method7123949</span>(<span class="hljs-params"></span>) <span class="hljs-title"><span class="hljs-keyword">public</span></span> <span class="hljs-title"><span class="hljs-keyword">pure</span></span> <span class="hljs-title"><span class="hljs-keyword">returns</span></span> (<span class="hljs-params"><span class="hljs-keyword">string</span> <span class="hljs-keyword">memory</span></span>) </span>{
    <span class="hljs-keyword">return</span> <span class="hljs-string">'If you know the password, submit it to authenticate().'</span>;
  }

  <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">authenticate</span>(<span class="hljs-params"><span class="hljs-keyword">string</span> <span class="hljs-keyword">memory</span> passkey</span>) <span class="hljs-title"><span class="hljs-keyword">public</span></span> </span>{
    <span class="hljs-keyword">if</span>(<span class="hljs-built_in">keccak256</span>(<span class="hljs-built_in">abi</span>.<span class="hljs-built_in">encodePacked</span>(passkey)) <span class="hljs-operator">=</span><span class="hljs-operator">=</span> <span class="hljs-built_in">keccak256</span>(<span class="hljs-built_in">abi</span>.<span class="hljs-built_in">encodePacked</span>(password))) {
      cleared <span class="hljs-operator">=</span> <span class="hljs-literal">true</span>;
    }
  }

  <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">getCleared</span>(<span class="hljs-params"></span>) <span class="hljs-title"><span class="hljs-keyword">public</span></span> <span class="hljs-title"><span class="hljs-keyword">view</span></span> <span class="hljs-title"><span class="hljs-keyword">returns</span></span> (<span class="hljs-params"><span class="hljs-keyword">bool</span></span>) </span>{
    <span class="hljs-keyword">return</span> cleared;
  }
}
</code></pre>
<h2 id="heading-review">Review 🕵️🕵️‍♀️🕵️‍♂️</h2>
<p>With the source code for this level in hand, I can say the technical objective of this level is to</p>
<blockquote>
<p>"set the <code>cleared</code> state variable to <code>true</code>".</p>
</blockquote>
<p>And the only obvious way to do that is to pass through the <code>authenticate</code> function, which requires us to know the <code>passkey</code>. The <code>passkey</code> is equivalent to the <code>password</code> state variable. The password is a public state variable and can be read directly via contract interaction, since Solidity auto-generates a getter function for every public state variable.</p>
<h3 id="heading-test">Test</h3>
<p>To try this out, I'll request a new instance and write the POC in Solidity. 🤞🤞🤞🤞🤞</p>
<pre><code class="lang-solidity"><span class="hljs-comment">// SPDX-License-Identifier: MIT</span>
<span class="hljs-meta"><span class="hljs-keyword">pragma</span> <span class="hljs-keyword">solidity</span> ^0.6.0;</span>

<span class="hljs-keyword">import</span> <span class="hljs-string">"forge-std/Test.sol"</span>;
<span class="hljs-keyword">import</span> <span class="hljs-string">"@level0/instance.sol"</span>;

<span class="hljs-class"><span class="hljs-keyword">contract</span> <span class="hljs-title">POC</span> <span class="hljs-keyword">is</span> <span class="hljs-title">Test</span></span>{
    Instance instance <span class="hljs-operator">=</span> Instance(<span class="hljs-number">0x40123A7989f37D4075b5ac9a0b3818930A36F4ab</span>);

    <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">testHack</span>(<span class="hljs-params"></span>) <span class="hljs-title"><span class="hljs-keyword">public</span></span> <span class="hljs-title"><span class="hljs-keyword">returns</span></span>(<span class="hljs-params"><span class="hljs-keyword">bool</span></span>)</span>{
        assertFalse(instance.getCleared());
        instance.authenticate(instance.password());
        assertTrue(instance.getCleared());
    }
}
</code></pre>
<h4 id="heading-test-output">Test output</h4>
<pre><code class="lang-shell">&gt; forge test -vvv --fork-url $RPC_URL
[⠑] Compiling...
No files changed, compilation skipped

Running 1 test for test/level0/poc.sol:POC
[PASS] testHack():(bool) (gas: 35290)
Test result: ok. 1 passed; 0 failed; finished in 5.58s
</code></pre>
<p>Our test passes. Let's migrate this to a script so that the state change is persisted on the blockchain; then we can click the "Submit instance" button on the Ethernaut page.</p>
<h3 id="heading-script">Script</h3>
<p>The script is similar to the test file:</p>
<pre><code class="lang-solidity"><span class="hljs-comment">// SPDX-License-Identifier: MIT</span>
<span class="hljs-meta"><span class="hljs-keyword">pragma</span> <span class="hljs-keyword">solidity</span> ^0.6.0;</span>

<span class="hljs-keyword">import</span> <span class="hljs-string">"forge-std/Script.sol"</span>;
<span class="hljs-keyword">import</span> <span class="hljs-string">"@level0/instance.sol"</span>;

<span class="hljs-class"><span class="hljs-keyword">contract</span> <span class="hljs-title">POC</span> <span class="hljs-keyword">is</span> <span class="hljs-title">Script</span> </span>{
    Instance instance <span class="hljs-operator">=</span> Instance(<span class="hljs-number">0x40123A7989f37D4075b5ac9a0b3818930A36F4ab</span>);

    <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">run</span>(<span class="hljs-params"></span>) <span class="hljs-title"><span class="hljs-keyword">external</span></span> </span>{
        vm.startBroadcast();

        <span class="hljs-built_in">require</span>(<span class="hljs-operator">!</span>instance.getCleared(), <span class="hljs-string">"Level completed already"</span>);
        instance.authenticate(instance.password());
        <span class="hljs-built_in">require</span>(instance.getCleared(), <span class="hljs-string">"failed to complete level"</span>);

        vm.stopBroadcast();
    }
}
</code></pre>
<h4 id="heading-script-output">Script output</h4>
<p>Hurray! It was a success! Here are the logs:</p>
<pre><code class="lang-shell">&gt; forge script ./script/level0/poc.s.sol:POC --rpc-url $RPC_URL  --private-key $PRIVATE_KEY --broadcast -vvvv

[⠒] Compiling...
[⠆] Compiling 1 files with 0.6.12
[⠰] Solc 0.6.12 finished in 614.02ms
Compiler run successful
Traces:
  [38375] POC::run() 
    ├─ [0] VM::startBroadcast() 
    │   └─ ← ()
    ├─ [2450] 0x4012…f4ab::3c848d78() [staticcall]
    │   └─ ← 0x0000000000000000000000000000000000000000000000000000000000000000
    ├─ [3152] 0x4012…f4ab::224b610b() [staticcall]
    │   └─ ← 0x0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000a65746865726e6175743000000000000000000000000000000000000000000000
    ├─ [21513] 0x4012…f4ab::aa613b29(0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000a65746865726e6175743000000000000000000000000000000000000000000000) 
    │   └─ ← ()
    ├─ [450] 0x4012…f4ab::3c848d78() [staticcall]
    │   └─ ← 0x0000000000000000000000000000000000000000000000000000000000000001
    ├─ [0] VM::stopBroadcast() 
    │   └─ ← ()
    └─ ← ()


Script ran successfully.
Gas used: 38375
==========================
Simulated On-chain Traces:

  [47105] 0x4012…f4ab::aa613b29(0000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000a65746865726e6175743000000000000000000000000000000000000000000000) 
    └─ ← ()


==========================

Estimated total gas used for script: 47105
==========================

###
Finding wallets for all the necessary addresses...
##
Sending transactions [0 - 0].
⠁ [00:00:00] [#########################################################################################################################################] 1/1 txes (0.0s)
Transactions saved to: broadcast/poc.s.sol/4/run-latest.json

##
Waiting for receipts.
⠉ [00:00:18] [#####################################################################################################################################] 1/1 receipts (0.0s)
#####
✅ Hash: 0xa195faa5dd6cf7ca68e8e25c51396256b8c2877521737e40247c27eb62ded2e7
Block: 11001556
Paid: 0.000141315000423945 ETH (47105 gas * 3.000000009 gwei)


Transactions saved to: broadcast/poc.s.sol/4/run-latest.json



==========================

ONCHAIN EXECUTION COMPLETE &amp; SUCCESSFUL. Transaction receipts written to "broadcast/poc.s.sol/4/run-latest.json"

Transactions saved to: broadcast/poc.s.sol/4/run-latest.json
</code></pre>
<p>Clicking on the "Submit instance"  button after that:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1657482356452/4LDstD2A-.png" alt="script.png" /></p>
<p>Cheers! 🏆🏆🏆🏆
Let's have some fun!!
Onto the next one!</p>
<h1 id="heading-8jata">🚴</h1>
]]></content:encoded></item><item><title><![CDATA[CTF: Ethernaut]]></title><description><![CDATA[What is Ethernaut?
Ethernaut is an open-source capture-the-flag (CTF) game by the Openzeppelin Team to help improve smart contract development or security. The game is played on the Ethereum-Virtual-Machine (EVM); each level consists of a smart contr...]]></description><link>https://sorxcode.com/ctf-ethernaut</link><guid isPermaLink="true">https://sorxcode.com/ctf-ethernaut</guid><category><![CDATA[Ethernaut]]></category><dc:creator><![CDATA[Victor Adeyanju]]></dc:creator><pubDate>Sun, 10 Jul 2022 13:49:31 GMT</pubDate><content:encoded><![CDATA[<h2 id="heading-what-is-ethernaut">What is Ethernaut?</h2>
<p><a target="_blank" href="https://ethernaut.openzeppelin.com/">Ethernaut</a> is an open-source capture-the-flag (CTF) game by the Openzeppelin Team to help improve smart contract development or security. The game is played on the Ethereum-Virtual-Machine (EVM); each level consists of a smart contract to be hacked. At the time of writing, there are 27 levels, level 0 to level 26, of varying difficulty.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1657460575507/HcnhOfyuZ.png" alt="image.png" /></p>
<h2 id="heading-why-this-article">Why this article?</h2>
<p>Over the next few weeks, I'll be updating this post with a walkthrough of each level as I make progress. </p>
]]></content:encoded></item><item><title><![CDATA[Deploy and Access Flask App on Windows Server [No CGI]]]></title><description><![CDATA[Quite often you might find a need to deploy your flask application on a Windows Server manually. Though this might not really be a thing in the modern development space due to new technologies (e.g docker) that are revolutionising application develop...]]></description><link>https://sorxcode.com/deploy-and-access-flask-app-on-windows-server-no-cgi</link><guid isPermaLink="true">https://sorxcode.com/deploy-and-access-flask-app-on-windows-server-no-cgi</guid><category><![CDATA[Python 3]]></category><category><![CDATA[Python]]></category><category><![CDATA[windows server]]></category><category><![CDATA[Flask Framework]]></category><category><![CDATA[deployment]]></category><dc:creator><![CDATA[Victor Adeyanju]]></dc:creator><pubDate>Fri, 10 Dec 2021 07:23:18 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1639120854833/t8KDePUF_.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Quite often you might need to deploy your Flask application on a Windows Server manually. This might not really be a thing in the modern development space, given technologies (e.g. Docker) that are revolutionising application development and deployment; mine was prompted by a peer's need. We wanted a consistent development version without spending extra money or resources setting up CI/CD and the like, so we opted to put it on an existing VPS.</p>
<p>I was saddled with the responsibility of setting this up, and that wouldn't be much of a problem (or so I thought). Everything was going pretty well until I couldn't access the deployed application externally. Every walkthrough I could find online required FastCGI, but I had no luck installing it.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1637696247017/vMW-UgLqO.png" alt="Screenshot 2021-11-23 at 20.35.42.png" /></p>
<p>Then I found another way around...</p>
<h2 id="heading-open-port-for-tcp-access">Open port for TCP access</h2>
<p>To allow HTTP traffic to your application's port, we need to open a static port in the Windows firewall for TCP access. There are many <a target="_blank" href="https://www.google.com/search?q=enable+port+on+windows+server&amp;oq=enable+port+on+windows+server">ways this could be done</a> (e.g. CMD, Control Panel, etc.), but I used the CMD. To open port 8001, simply run:</p>
<pre><code class="lang-bash">netsh advfirewall firewall add rule name=<span class="hljs-string">"TCP Port 8001"</span> dir=<span class="hljs-keyword">in</span> action=allow protocol=TCP localport=8001
</code></pre>
<p>In case you need to close the port later, run the command below:</p>
<pre><code class="lang-bash">netsh advfirewall firewall delete rule name=<span class="hljs-string">"TCP Port 8001"</span> protocol=TCP localport=8001
</code></pre>
<h2 id="heading-set-flask-server-host-to-0000">Set Flask server host to 0.0.0.0</h2>
<p>The default host used by the Flask server is <code>127.0.0.1</code>; this makes the server accessible only from the machine running Flask, i.e. your local computer.
The last piece is to make the server publicly available: set the Flask host to <code>0.0.0.0</code>.</p>
<p>To change the host, run your Flask project using:</p>
<pre><code class="lang-bash">flask run --host=0.0.0.0
</code></pre>
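<p>Alternatively, the host (and port) can be set in code rather than on the command line. The file name, route, and port below are just examples, not the exact app from this project:</p>
<pre><code class="lang-python"># app.py: example only; your entry point and port may differ.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from the Windows server!"

if __name__ == "__main__":
    # 0.0.0.0 binds all interfaces, matching the firewall port opened earlier.
    app.run(host="0.0.0.0", port=8001)
</code></pre>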
<h2 id="heading-access-the-application-over-the-internet">Access the application over the internet</h2>
<p>Your application is now published and can be accessed from outside your server.
Use your machine's IP address or domain name plus the port, e.g. <code>http://192.168.28.20:8001</code>.</p>
<p>Thanks for your time!</p>
]]></content:encoded></item><item><title><![CDATA[Python Type Hints and Speed]]></title><description><![CDATA[Python Type Hinting and Speed
Type hints are annotations that specify the expected types of values in your Python code. This looks statically typed, right?



This was specified in PEP 484 and introduced in Python 3.5.
It’s intuitive to think there wou...]]></description><link>https://sorxcode.com/python-type-hints-and-speed</link><guid isPermaLink="true">https://sorxcode.com/python-type-hints-and-speed</guid><category><![CDATA[Python]]></category><dc:creator><![CDATA[Victor Adeyanju]]></dc:creator><pubDate>Wed, 04 Nov 2020 14:21:41 GMT</pubDate><content:encoded><![CDATA[
<h1 id="python-type-hinting-and-speed">Python Type Hinting and Speed</h1>
<p>Type hints are annotations that specify the expected types of values in your Python code. This looks statically typed, right?</p>
<p><img src="https://miro.medium.com/max/60/1*llWP7DY29O8VF8gbe1pd2g.png?q=20" alt="Image for post" /></p>
<img alt="Image for post" src="https://miro.medium.com/max/1410/1*llWP7DY29O8VF8gbe1pd2g.png" />

<p>This was specified in PEP 484 and introduced in Python 3.5.</p>
<p>It’s intuitive to think there would be some performance (speed) improvement at runtime, since statically typed languages do not intrinsically need to check data types at runtime. Right? Let’s check it out.</p>
<p>We will be using the classical Fibonacci series to check whether type hinting improves performance.</p>
<h1 id="fibonacci-without-type-hint">FIBONACCI WITHOUT TYPE HINT</h1>
<img alt="Image for post" src="https://miro.medium.com/max/2090/1*QgkKyUtR30MogppNmFyfiA.png" />

<p>Fibonacci series without type hint.</p>
<p>The Fibonacci function is called 10,000 times, so the append operation runs 10,000 * 100 times (100 being the nth term).</p>
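<p>The screenshot above isn’t copyable, so here is a plausible reconstruction of the benchmark (the function name and exact structure are assumptions): each call builds the first 100 terms, and the call is repeated 10,000 times with <code>timeit</code>.</p>
<pre><code class="lang-python">from timeit import timeit

def fibonacci(n):
    # Build the first n terms of the Fibonacci series in a list.
    series = []
    a, b = 0, 1
    for _ in range(n):
        series.append(a)
        a, b = b, a + b
    return series

# Call the function 10,000 times; each call appends 100 terms.
print(timeit("fibonacci(100)", globals=globals(), number=10_000))
</code></pre>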
<img alt="Image for post" src="https://miro.medium.com/max/2260/1*xQKlcw-zu1n4py4r5d1oeg.png" />

<p>Runtime profile</p>
<p>The program completed execution in approximately <em>0.705 seconds</em>.</p>
<h1 id="fibonacci-with-type-hinting">FIBONACCI WITH TYPE HINTING</h1>
<p>Now, the same algorithm but with type hints.</p>
<img alt="Image for post" src="https://miro.medium.com/max/2254/1*5jQiEy9Kkp-bEFK6oOnclg.png" />
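<p>Again, the exact code in the screenshot is an assumption, but the annotated version would look roughly like this (same algorithm, with type hints added):</p>
<pre><code class="lang-python">from timeit import timeit
from typing import List

def fibonacci(n: int) -> List[int]:
    # Same algorithm as before, now with type annotations.
    series: List[int] = []
    a, b = 0, 1
    for _ in range(n):
        series.append(a)
        a, b = b, a + b
    return series

print(timeit("fibonacci(100)", globals=globals(), number=10_000))
</code></pre>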

<p>The program completed execution in approximately <em>0.708 seconds</em>.</p>
<p>The execution times are virtually identical; the small difference is just run-to-run noise (CPU state, scheduling, and so on). CPython stores annotations but never checks them at runtime, so type hints add no per-call cost and no speedup.</p>
<blockquote>
<p><a target="_blank" href="https://www.python.org/dev/peps/pep-0484/">It should also be emphasized that Python will remain a dynamically typed language, and the authors have no desire to ever make type hints mandatory, even by convention.</a></p>
</blockquote>
<h1 id="why-type-hints-then">Why Type Hints then?</h1>
<p>Python’s core philosophy, as summarized in the <a target="_blank" href="https://en.wikipedia.org/wiki/Zen_of_Python">Zen of Python</a>, includes:</p>
<ul>
<li>Beautiful is better than ugly.</li>
<li>Explicit is better than implicit.</li>
<li>Simple is better than complex.</li>
<li>Complex is better than complicated.</li>
<li><strong>Readability counts.</strong></li>
</ul>
<p>Type hints make Python code more readable to both humans and static analysis tools.</p>
<blockquote>
<p><a target="_blank" href="https://www.python.org/dev/peps/pep-0484/">This PEP[484] aims to provide a standard syntax for type annotations, opening up Python code to easier static analysis and refactoring, potential runtime type checking, and (perhaps, in some contexts) code generation utilizing type information.</a></p>
</blockquote>
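<p>As a quick illustration of that static analysis (a hypothetical function, not from this article), a checker such as <code>mypy</code> can report a type mismatch before the program ever runs:</p>
<pre><code class="lang-python">def greet(name: str) -> str:
    return "Hello, " + name

print(greet("world"))

# A call like greet(42) would only fail at runtime with a TypeError,
# but a static checker such as mypy flags the int/str mismatch
# without executing the code at all.
</code></pre>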
<h1 id="readability-counts">Readability COUNTS</h1>
]]></content:encoded></item></channel></rss>