<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
  <title>Posts tagged with “Python” on Mark van Lent’s weblog</title>
  <updated>2026-01-31T00:00:00+00:00</updated>
  <link rel="self" type="application/atom+xml" href="https://markvanlent.dev/tags/python/index.xml" hreflang="en"/>
  <id>tag:markvanlent.dev,2010-04-02:/tags/python/index.xml</id>
  <link rel="alternate" type="text/html" href="https://markvanlent.dev/tags/python/" hreflang="en"/>
  <author>
      <name>Mark van Lent</name>
      <uri>https://markvanlent.dev/about/</uri>
    </author>
  <rights>Copyright (c) Mark van Lent, Creative Commons Attribution 4.0 International License.</rights>
  <icon>https://markvanlent.dev/favicon.ico</icon>
  <entry>
    <title type="html"><![CDATA[FOSDEM 2026]]></title>
    <link rel="alternate" href="https://markvanlent.dev/2026/01/31/fosdem-2026/" type="text/html" />
    <id>https://markvanlent.dev/2026/01/31/fosdem-2026/</id>
    <author>
      <name>Mark van Lent</name>
      <uri>https://markvanlent.dev/about/</uri>
    </author>
    <category term="ansible" />
    <category term="conference" />
    <category term="docker" />
    <category term="infrastructure as code" />
    <category term="git" />
    <category term="python" />
    <category term="security" />
    
    <updated>2026-02-01T14:54:22Z</updated>
    <published>2026-01-31T00:00:00Z</published>
    <content type="html"><![CDATA[<p>January is already almost over, so time for <a href="https://fosdem.org/2026/">FOSDEM</a>,
the yearly <q>free event for software developers to meet, share ideas and
collaborate</q> in Brussels. <a href="/2025/02/01/fosdem-2025/">Last year</a> I
focused on the Go track; this year I selected a mix of security- and
Python-related talks to attend.</p>
<h2 id="streamlining-signed-artifacts-in-container-ecosystems--tonis-tiigi">Streamlining Signed Artifacts in Container Ecosystems &mdash; Tonis Tiigi</h2>
<p>It&rsquo;s possible to sign Docker images, but at the moment most are actually not
signed. Also, users should understand what a signature is protecting and what
it&rsquo;s <em>not</em> protecting. We should not want signing just to tick a box on the
security checklist, but because of the security it adds. And we need something
simple: integrated with existing tools and not slowing them down.</p>
<p>Buildkit powers &ldquo;<code>docker build</code>&rdquo; but is not limited to Dockerfiles. It&rsquo;s high
performance, can handle complex builds and has caching.</p>
<p>A modern build is a graph of images, Git repositories, local files, etc. The
results are images, binaries, archives.</p>
<figure><img src="/images/fosdem2026_tonis_tiigi.jpg"
    alt="Photo of Tonis Tiigi explaining the graph that is modern software building"><figcaption>
      <p>Tonis Tiigi explaining that builds of modern software are a complex graph</p>
    </figcaption>
</figure>

<p>We need Supply-chain Levels for Software Artifacts (SLSA) provenance: what has
actually happened in the build? What was the build config? Et cetera. It&rsquo;s useful to
figure out how an artifact was built.</p>
<p>Buildkit does not sign images by default. GitHub has <a href="https://docs.github.com/en/packages/managing-github-packages-using-github-actions-workflows/publishing-and-installing-a-package-with-github-actions#publishing-a-package-using-an-action">an example in the
documentation</a>
to run a build with Buildkit and generate an artifact. It claims to generate an
<q>unforgeable statement</q>. But if your GitHub credentials are
leaked and the attacker can get their hands on the temporary signing key, they
can use it to sign their own artifacts.</p>
<p>Docker created the <a href="https://github.com/docker/github-builder">github-builder</a>
repository. It contains reusable GitHub Actions to securely build images. If you
use this, your images are signed to prove that they were built from a certain
repository, using the configured build steps. Where Buildkit (among other
things) provides isolation, <code>github-builder</code> provides signing context. It also
protects against build dependency leaks.</p>
<p>So that takes care of the signatures, but how do you verify them?</p>
<ul>
<li>The command &ldquo;<code>docker inspect</code>&rdquo; now shows verified signatures</li>
<li>You can manually verify it with <a href="https://github.com/sigstore/cosign">cosign</a></li>
<li>You can also use sigstore/policy-controller for Kubernetes</li>
</ul>
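<p>As an illustration, verifying such a signature manually with cosign typically
looks like this (the image name and signing identity are hypothetical):</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-shell" data-lang="shell">cosign verify \
  --certificate-identity "https://github.com/org/app/.github/workflows/build.yml@refs/tags/v1.0.0" \
  --certificate-oidc-issuer "https://token.actions.githubusercontent.com" \
  ghcr.io/org/app:v1.0.0
</code></pre></div>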
<p>Buildx also includes experimental Rego (Open Policy Agent) policy support. This
means you can write a matching policy for <code>Dockerfile</code>, e.g. <code>Dockerfile.rego</code>,
which is then automatically loaded. All build sources (images, Git repositories,
URLs, etc.) then need to pass the policy for the build to continue.</p>
<p>You can do very complex stuff in the policies. As a simple example, Tonis showed:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-rego" data-lang="rego"><span class="line"><span class="cl"><span class="kd">package</span><span class="w"> </span><span class="nx">docker</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="n">allow</span><span class="w"> </span><span class="kd">if</span><span class="w"> </span><span class="p">{</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nx">input</span><span class="o">.</span><span class="nx">image</span><span class="o">.</span><span class="nx">repo</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s2">&#34;org/app&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nf">docker_github_builder_tag</span><span class="p">(</span><span class="nx">input</span><span class="o">.</span><span class="nx">image</span><span class="o">,</span><span class="w"> </span><span class="s2">&#34;org/app&#34;</span><span class="o">,</span><span class="w"> </span><span class="nx">input</span><span class="o">.</span><span class="nx">image</span><span class="o">.</span><span class="nx">tag</span><span class="p">)</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="p">}</span><span class="w">
</span></span></span></code></pre></div><p>This policy ensures that the image can only be built from this
repository and that the image tag matches the Git tag.</p>
<p>Summary:</p>
<ul>
<li>No reason not to sign</li>
<li>Not all signatures are equal</li>
<li>Software pulling packages should verify pulled content</li>
</ul>
<p><a href="https://fosdem.org/2026/schedule/event/HJAJTU-streamlining_signed_artifacts_in_container_ecosystems/">Link to the conference page</a></p>
<h2 id="sequoia-git-making-signed-commits-matter--neal-h-walfield">Sequoia git: Making Signed Commits Matter &mdash; Neal H. Walfield</h2>
<p>Version control systems (also known as VCSs) track the following:</p>
<ul>
<li>Changes to the code</li>
<li>Authorship</li>
<li>Other metadata</li>
<li>Commit message</li>
</ul>
<p>But the author can be faked: the metadata is set by the author, including the
author&rsquo;s name. After a quick &ldquo;<code>git config</code>&rdquo; command you can commit as anyone you
want, for example <a href="https://en.wikipedia.org/wiki/Linus_Torvalds">Linus Torvalds</a>.
Sure, GitHub could see that the committer (the one pushing the commit) and
author are different. However, this is not necessarily bad because we might
simply want to give proper attribution to the author of the commit.</p>
<p>And in theory the forge might also be compromised, or someone may have gotten
permission to push to the project.</p>
<p>To prevent impersonations, we can cryptographically prove who the author is by
signing the commits. But now the problem shifts to the certificates. Because
anyone can create a key with any name (again, for example Linus) attached to it.
So what does a signed commit mean now?</p>
<p>How can we be sure that the author is who they say they are? There are ways:</p>
<ul>
<li>You could talk to the developer to verify their key</li>
<li>You could go to <a href="https://en.wikipedia.org/wiki/Key_signing_party">key signing parties</a></li>
<li>You can use a central authority that you trust (e.g.
<a href="https://keys.openpgp.org/">keys.openpgp.org</a>, the Linux developer keyring,
the <code>distributions-gpg-keys</code> package, or, if you trust GitHub, use
<code>github.com/&lt;username&gt;.gpg</code>)</li>
</ul>
<p>You can use the following command to show the Git log and the signatures on them:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-shell" data-lang="shell"><span class="line"><span class="cl">git log --show-signature
</span></span></code></pre></div><p>But now you need to actually check that the signatures are indeed made by the
certificates you trust.</p>
<p>It&rsquo;s up to the maintainers of the software to curate a list of contributors and
track when contributors join and leave (yes, there is a temporal element as
well). This is hard and maintainers need tooling. And you would want to detect
unauthorized commits (impersonation, a malicious forge, a machine in the middle
or, for instance, when a project is handed to a new maintainer by a forge/registry).</p>
<p>What does the solution look like?</p>
<ul>
<li>Clear semantics</li>
<li>The project itself maintains signing policy</li>
<li>Third party uses maintainers&rsquo; policy to authenticate project</li>
<li>Verification, not attestation: do not rely on any external authority</li>
</ul>
<p>(Note that the maintainers can still be socially engineered to include the key
of an attacker in their policy. So they still have to be careful about who is
added to the policy.)</p>
<p>Sequoia git provides:</p>
<ul>
<li>Specification</li>
<li>Config</li>
<li>Tooling</li>
</ul>
<p>With <a href="https://gitlab.com/sequoia-pgp/sequoia-git">Sequoia git</a> (which is part of
the <a href="https://sequoia-pgp.org/">Sequoia PGP project</a>) you can have a signing
policy in an <code>openpgp-policy.toml</code> file in the project&rsquo;s Git repository. It
specifies users, their keys and their capabilities. You can use <code>sq-git</code> to help
maintain this file.</p>
<p>For instance to add user Alice and then describe the current policy, you can use
the following commands:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-shell" data-lang="shell"><span class="line"><span class="cl">sq-git policy authorize alice --committer &lt;cert&gt;
</span></span><span class="line"><span class="cl">sq-git policy describe
</span></span></code></pre></div><p>A commit is &ldquo;authenticated&rdquo; if at least one parent commit says the commit is
acceptable (via the policy). To verify that there is an authenticated path from
the current state back to a certain commit we trust, use this command:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-shell" data-lang="shell"><span class="line"><span class="cl">sq-git log --trust-root &lt;sha of trusted commit&gt;
</span></span></code></pre></div><p>Projects may have contributions from others that are not included in the policy.
To maintain an authenticated path when accepting the contribution, a trusted
author needs to merge the contribution via a merge commit that <em>is</em>
authenticated. (You may need to use the &ldquo;<code>--no-ff</code>&rdquo; flag on the merge to make
sure there is a merge commit though.)</p>
<p><a href="https://fosdem.org/2026/schedule/event/KFSUCW-sequoia-git/">Link to the conference page</a></p>
<h2 id="an-endpoint-telemetry-blueprint-for-security-teams--victor-lyuboslavsky">An Endpoint Telemetry Blueprint for Security Teams &mdash; Victor Lyuboslavsky</h2>
<p>With open source we can inspect something that is broken, we can change the
defaults. With security we are used to the opposite; it&rsquo;s a black box. We are
not used to owning the data. The data exists on the endpoints, but ownership is
transferred to a different team. How can we add more security in a way engineers
understand and can use?</p>
<p>Victor presents a blueprint with the following layers:</p>
<ul>
<li>Endpoint agents</li>
<li>Control layer</li>
<li>Ingestion, streaming &amp; storage</li>
<li>Detection</li>
<li>Correlation, intelligence and response</li>
</ul>
<p>The value is not in the layers themselves, but in the boundaries. For example, the
ingestion layer should move the data reliably but should not care which tool collected
it. This keeps the layers loosely coupled.</p>
<p>For endpoint agents Victor suggests
<a href="https://github.com/osquery/osquery">osquery</a>, which allows asking basic
questions about endpoints. Data is structured and consistent. It aligns with open
source values. (Alternatives: scripts &amp; cron, log shippers like Filebeat, or
tools like auditd or Event Tracing for Windows.)</p>
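<p>As an illustration, an osquery question is just SQL against virtual tables.
This hypothetical query lists processes whose binary no longer exists on disk:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-sql" data-lang="sql">SELECT name, pid, path FROM processes WHERE on_disk = 0;
</code></pre></div>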
<p>Controlling the data (the next layer) means that you want to have:</p>
<ul>
<li>Central config</li>
<li>Live queries</li>
<li>Consistent schemas</li>
</ul>
<p><a href="https://github.com/fleetdm/fleet">Fleet</a> (disclaimer: Victor works there) is
built to manage <code>osquery</code> at scale and is a good candidate for this layer.</p>
<p>The control layer needs to work hand-in-hand with the ingestion layer. The
ingestion layer moves data to downstream systems. E.g. <a href="https://github.com/vectordotdev/vector">Vector</a> or
<a href="https://www.elastic.co/logstash">Logstash</a> can be used here.</p>
<blockquote>
<p>Ingestion isn&rsquo;t where you get clever. It&rsquo;s where you get reliable.</p></blockquote>
<p>Streaming decouples producers from consumers and e.g. allows replay. Note that this
is an optional step and it would come <em>after</em> ingestion, not <em>in place of</em> it.
For instance <a href="https://kafka.apache.org/">Apache Kafka</a> can be used in this
layer. Ingestion absorbs the mess. Streaming preserves flexibility.</p>
<p>The storage layer is where telemetry becomes durable. It&rsquo;s about being able to
ask hard questions later. Examples of useful tools:
<a href="https://github.com/ClickHouse/ClickHouse">ClickHouse</a>,
<a href="https://www.elastic.co/elasticsearch">Elasticsearch</a> (which is better at text
search) and <a href="https://github.com/apache/iceberg">Iceberg</a> (which is slower for
active investigation).</p>
<p>For the detection layer you might want to use
<a href="https://github.com/SigmaHQ/Sigma">Sigma</a>. It provides portability. Rules are
translated to native SQL running on ClickHouse. Intent (Sigma signatures)
becomes execution (SQL queries to get the data).</p>
<p>Finally the correlation layer: <a href="https://github.com/grafana/grafana">Grafana</a>
can be used for correlation and visualisation. Grafana can query ClickHouse.
Grafana also has alerting.</p>
<p>Note that response isn&rsquo;t just about automation. It&rsquo;s also about pausing and
asking better questions. The correlation layer should focus on enabling humans to act.</p>
<p>Open endpoint telemetry is <strong>not</strong> an &ldquo;EDR killer&rdquo;. It does not replace it. It adds
diversity and complements other tools. It provides a second set of eyes.</p>
<p><a href="https://fosdem.org/2026/schedule/event/HYXTPH-endpoint-telemetry-blueprint/">Link to the conference page</a></p>
<h2 id="the-bakery-how-pep810-sped-up-my-bread-operations-business--jacob-coffee">The Bakery: How PEP810 sped up my bread operations business &mdash; Jacob Coffee</h2>
<p>Python loads imports eagerly by default. This leads to memory bloat and cold
start issues. Explicit lazy imports (see
<a href="https://peps.python.org/pep-0810/">PEP 810</a>) only import a module when it&rsquo;s
first accessed, not when the import statement is executed.</p>
<p>Lazy imports are scheduled to be included in Python 3.15 and look like this:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-python" data-lang="python"><span class="line"><span class="cl"><span class="n">lazy</span> <span class="kn">from</span> <span class="nn">bar</span> <span class="kn">import</span> <span class="nn">foo</span>
</span></span></code></pre></div><p>The design principles applied are that lazy imports are:</p>
<ul>
<li>Explicit</li>
<li>Local</li>
<li>Granular</li>
</ul>
<p>When the import statement runs, a proxy object is created. Only when the module
is actually used is the proxy transparently replaced by the real module. You will
not always see improvements, so do not blindly replace all imports with lazy
imports.</p>
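<p>Until 3.15 lands, a similar deferred-import effect can be approximated with the
standard library&rsquo;s <code>importlib.util.LazyLoader</code> (this is the recipe from the
<code>importlib</code> documentation, not PEP 810 itself; the module name is just for
illustration):</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-python" data-lang="python">import importlib.util
import sys

def lazy_import(name):
    # Create the module object now, but defer executing its body
    # until the first attribute access.
    spec = importlib.util.find_spec(name)
    loader = importlib.util.LazyLoader(spec.loader)
    spec.loader = loader
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    loader.exec_module(module)
    return module

json = lazy_import("json")         # no module body executed yet
print(json.dumps({"lazy": True}))  # first access triggers the real import
</code></pre></div>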
<p>PEP 810 also eliminates the need for <code>TYPE_CHECKING</code> guards. (See the <a href="https://docs.python.org/3/library/typing.html#typing.TYPE_CHECKING">typing
docs</a>, in
short: importing a module that is expensive and only contains types used for
type checking in an &ldquo;<code>if TYPE_CHECKING:</code>&rdquo; block.) It also helps with faster test
discovery and collection, lower memory usage and reduced cold start slowness in
e.g. AWS Lambda functions, CLI applications, etc.</p>
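<p>For reference, the guard pattern that PEP 810 makes unnecessary looks like this
(a sketch, with <code>Decimal</code> standing in for an expensive import):</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-python" data-lang="python">from __future__ import annotations

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Only seen by the type checker; never imported at runtime.
    from decimal import Decimal

def total(amounts: list[Decimal]) -> Decimal:
    return sum(amounts)
</code></pre></div>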
<p>Meta (with Cinder) saw a 70% startup time reduction and 40% memory savings.
PySide has a 35% startup improvement.</p>
<p>About CLI tools: when using lazy imports you might notice the difference when
using <code>--help</code>. There&rsquo;s no need to load all dependencies to just output the help
text of a tool.</p>
<p>Some notes:</p>
<ul>
<li>Import time side effects (e.g. logging configuration, DB connections) are also
delayed!</li>
<li>Type checkers need to be updated</li>
<li>Import errors move to first use (so at runtime, not at launch). Keep that in
mind when debugging</li>
<li>It&rsquo;s not always faster, so profile your application before migrating and see
where you can potentially benefit</li>
<li>Document your lazy imports!</li>
<li>You cannot do lazy imports in functions</li>
</ul>
<p>Circular imports are probably still a problem, but they just show up later.</p>
<p><a href="https://github.com/JacobCoffee/breadctl">Link to the repo for this talk</a></p>
<p><a href="https://fosdem.org/2026/schedule/event/HAAABD-the_bakery_how_pep810_sped_up_my_bread_operations_business/">Link to the conference page</a></p>
<h2 id="modern-python-monorepo-with-uv-workspaces-prek-and-shared-libraries--jarek-potiuk">Modern Python monorepo with <code>uv</code>, <code>workspaces</code>, <code>prek</code> and shared libraries &mdash; Jarek Potiuk</h2>
<p>Jarek is, besides his other roles, the number 1 Apache Airflow contributor. The
<a href="https://github.com/apache/airflow">Apache Airflow repo</a> is the monorepo he
talks about today. There is also a series of blog posts about this topic: see
<a href="https://medium.com/apache-airflow/modern-python-monorepo-for-apache-airflow-part-1-1fe84863e1e1">part 1</a>,
which links to the other parts.</p>
<p>Airflow drove early requirements for
<a href="https://docs.astral.sh/uv/concepts/projects/workspaces/">uv workspaces</a>. They now
manage 120+ distributions seamlessly with it. It allows them to combine
distributions to work together in a workspace and to import from one
distribution into another.</p>
<p>The project shares a single virtual environment, used by <code>uv</code>, in the root of
the project. If you run &ldquo;<code>uv sync</code>&rdquo; from the top level you get everything. If
you run it in a subdirectory (e.g. <code>airflow-core</code>) you only get what is needed
for that distribution.</p>
<p>Benefits of the <code>uv</code> workspaces:</p>
<ul>
<li>Isolated</li>
<li>Explicit</li>
<li>Flexible</li>
</ul>
<p><a href="https://hatch.pypa.io/1.12/">Hatch</a> has (or will have, at the time of writing)
largely compatible workspaces.</p>
<p>However <a href="https://pre-commit.com/">pre-commit</a> became a bottleneck: they needed
to run 170+ pre-commit hooks <strong>on every commit</strong>.
<a href="https://github.com/j178/prek">Prek</a> is a drop-in replacement for pre-commit
that works fantastically. It is optimized for speed and monorepos.</p>
<p>Airflow uses symlinked shared libraries (where a shared lib is also a
distribution). The Hatchling build backend needs to replace links with physical
copies during packaging. They use Prek to maintain consistency.</p>
<p><code>uv sync</code> detects conflicts between merged requirements files and Prek hooks
enforce relative imports in shared code to prevent cross-coupling issues (IIRC).</p>
<p><a href="https://fosdem.org/2026/schedule/event/WE7NHM-modern-python-monorepo-apache-airflow/">Link to the conference page</a></p>
<h2 id="pyinfra-because-your-infrastructure-deserves-real-code-in-python-not-yaml-soup--loïc-wowi42-tosser">PyInfra: Because Your Infrastructure Deserves Real Code in Python, Not YAML Soup &mdash; Loïc &ldquo;wowi42&rdquo; Tosser</h2>
<p>Loïc is a Frenchman (which, as he himself states, means he <strong>must</strong> have
opinions) and, to put it mildly, not a YAML fan. That is: YAML as a programming
language, e.g. how it is used in <a href="https://github.com/ansible/ansible">Ansible</a>.</p>
<figure><img src="/images/fosdem2026_loic_tosser.jpg"
    alt="Photo of Loïc Tosser showing a complex Ansible task in YAML"><figcaption>
      <p>Loïc Tosser demonstrating what happens when you ask a config file to be a programming language</p>
    </figcaption>
</figure>

<p><a href="https://pyinfra.com/">PyInfra</a> is an infrastructure as code library to write
Python code which is then translated to shell scripts to run on the target
hosts. So, in contrast to Ansible, you do not need Python on the target. The
target machine only needs SSH and a POSIX shell. You can also configure Docker
containers with PyInfra.</p>
<blockquote>
<p>If it has SSH, PyInfra can talk to it.</p></blockquote>
<p>PyInfra has idempotent operations and built-in diff checking. Declarative
infrastructure with actual code and not YAML. You can use inventory from
Terraform, Coolify or any API.</p>
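<p>A deploy file is plain Python. This sketch (the package, template and service
names are made up) shows the declarative style; it is not run directly but passed
to the <code>pyinfra</code> CLI together with an inventory:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-python" data-lang="python"># deploy.py -- run with: pyinfra inventory.py deploy.py
from pyinfra.operations import apt, files, systemd

apt.packages(
    name="Install nginx",
    packages=["nginx"],
    update=True,
)

files.template(
    name="Deploy the site config",
    src="templates/site.conf.j2",
    dest="/etc/nginx/conf.d/site.conf",
)

systemd.service(
    name="Restart nginx",
    service="nginx",
    restarted=True,
)
</code></pre></div>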
<p>You can leverage the entire Python packaging ecosystem. Slack integration? Just
use the right Python package.</p>
<p>PyInfra is not only a CLI tool, you can also use it as a library.</p>
<p>PyInfra is 10 times faster than Ansible, uses 70% less code, has proper code
reuse via <code>import</code> and proper loops instead of <code>with_items</code>. It can have actual
unit tests and can scale to thousands of servers. Also you no longer have error
messages stating that <q>the error appears to be in &hellip; <strong>but may be
elsewhere in the file</strong> &hellip;</q> (looking at you Ansible). PyInfra has
clear error messages without having to specify <code>-vvvv</code> and wading through
hundreds of lines of output.</p>
<p>The suggested migration path:</p>
<ul>
<li>Start small, one playbook at a time</li>
<li>Use your IDE for autocomplete and refactoring</li>
<li>Leverage Python&rsquo;s standard library and the ecosystem with all its packages</li>
<li>Sleep better because you don&rsquo;t have to debug at 3 AM.</li>
</ul>
<p>Is PyInfra production ready? Yes! It has a stable API, is already in use in
production, it&rsquo;s actively maintained and is MIT licensed (so no commercial
entity behind it to steer its direction).</p>
<p>You can get started today with a simple &ldquo;<code>pip install pyinfra</code>&rdquo;.</p>
<p><a href="https://fosdem.org/2026/schedule/event/VEQTLH-infrastructure-as-python/">Link to the conference page</a></p>
<p>(Note from me, Mark, I found Loïc a great speaker: he has lots of energy, is
funny and can transfer his enthusiasm to the room. If the topic interests you
and the video becomes available, I would recommend watching this talk as a great
sales pitch to get started with PyInfra.)</p>
<h2 id="ducks-to-the-rescue---etl-using-python-and-duckdb--marc-andré-lemburg">Ducks to the rescue - ETL using Python and DuckDB &mdash; Marc-André Lemburg</h2>
<p>ETL stands for Extract, Transform, Load. Nowadays we usually do Extract, Load,
Transform because databases are efficient at processing data.</p>
<p>DuckDB is an open source, in-process analytics database (OLAP). It is similar
to SQLite, but for OLAP workloads. It has great Python support and uses SQL as
its standard query language. It&rsquo;s pip installable and column based
(<a href="https://arrow.apache.org/">Apache Arrow</a>). It&rsquo;s single writer but allows for
multiple readers, so it&rsquo;s not a distributed database.</p>
<p><a href="https://github.com/pola-rs/polars">Polars</a>&rsquo; streaming can help with processing
your data as a line-by-line stream so you don&rsquo;t have to load the whole file in
memory at once.</p>
<p>Example to load a CSV file into DuckDB extremely fast:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-sql" data-lang="sql"><span class="line"><span class="cl"><span class="k">SELECT</span><span class="w"> </span><span class="o">*</span><span class="w"> </span><span class="k">FROM</span><span class="w"> </span><span class="n">read_csv</span><span class="p">(...)</span><span class="w">
</span></span></span></code></pre></div><p>You can load the data into staging tables first to prepare everything and not
mess up e.g. existing data. You can then transform data in DuckDB, e.g. filter
out unneeded and duplicate data, validate data, fill in missing data, convert
data types, etc. You can do the transforms in SQL. You can even use native
integrations to write to PostgreSQL, MySQL, etc. Or, worst case, stream to Python.</p>
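<p>The staging-table pattern is engine agnostic; the sketch below uses the standard
library&rsquo;s <code>sqlite3</code> so it runs anywhere (with DuckDB you would load via
<code>read_csv(...)</code> instead, and the table and column names here are made up):</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-python" data-lang="python">import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE staging_orders (id INTEGER, amount REAL);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL);
""")

# Extract & load raw rows into a staging table first,
# so existing data stays untouched if something goes wrong.
conn.executemany(
    "INSERT INTO staging_orders VALUES (?, ?)",
    [(1, 9.99), (1, 9.99), (2, None), (3, 5.00)],
)

# Transform in SQL: drop duplicates and invalid rows, then load.
conn.execute("""
    INSERT INTO orders
    SELECT DISTINCT id, amount FROM staging_orders WHERE amount IS NOT NULL
""")
rows = conn.execute("SELECT id, amount FROM orders ORDER BY id").fetchall()
print(rows)  # deduplicated, validated rows only
</code></pre></div>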
<p>Guidelines:</p>
<ul>
<li>Know your queries, that is: know how your data is going to be used</li>
<li>Use the Pareto principle (80/20 rule): optimize for queries that are used
often</li>
<li>Keep a healthy balance between performance and space requirements (which are
often trade-offs)</li>
</ul>
<p>Huge datasets: use the <a href="https://github.com/duckdb/ducklake">DuckLake</a> extension.</p>
<p>To get started: &ldquo;<code>uv add duckdb</code>&rdquo;. Do some experiments and see how it works for
you.</p>
<p><a href="https://fosdem.org/2026/schedule/event/S7RELZ-ducks_to_the_rescue_-_etl_using_python_and_duckdb/">Link to the conference page</a></p>
<h2 id="my-takeaways">My takeaways</h2>
<ul>
<li>Yes, FOSDEM is crowded and you may not be able to get into every talk you want
to see in person, but it&rsquo;s still nice to be there. It&rsquo;s well organised and
there&rsquo;s a friendly atmosphere. Lots of interesting projects to see and people
to talk to. And it&rsquo;s convenient if you want to sponsor your favorite projects
by buying some merchandise.</li>
<li>It&rsquo;s worth investigating signing Docker images (in the right way) further.</li>
<li>Lazy imports look useful! Once Python 3.15 lands it&rsquo;s worth doing profiling on
the projects I work on to see if we can use those to speed things up on
startup and save some memory.</li>
<li>At work we recently decided to go for a monorepo for a project. I want to see
if/how <code>uv</code> workspaces and <code>prek</code> can help us.</li>
<li>I&rsquo;ve written a bunch of Ansible roles to configure my humble homelab and
laptop. Perhaps it&rsquo;s time to switch to PyInfra? It sounds promising and might
be worth the investment of migrating to.</li>
</ul>
<h2 id="about-the-trip">About the trip</h2>
<p><figure class="float-right"><img src="/images/fosdem2026_atomium.jpg"
    alt="Picture of the Atomium at night" width="200px"><figcaption>
      <p>The <a href="https://en.wikipedia.org/wiki/Atomium">Atomium</a> at night</p>
    </figcaption>
</figure>

Last year I drove to Brussels on Friday and stayed at the city center in the
<a href="https://cityboxhotels.com/hotels/brussels/citybox-brussels">Citybox Brussels
hotel</a> for one
night, since I had to be home on Sunday. The upside: it was just a short (15
minute?) tram ride to the FOSDEM location. Unfortunately it did mean I had to
drive home that evening.</p>
<p>This year I had more time, so I booked a room at
<a href="https://www.falkohotel.be/">Falko Hotel</a> for two nights. It&rsquo;s about a 20&ndash;30
minute drive (depending on traffic) to the <a href="https://www.interparking.be/en/parkings/brussels/toison-d-or/">parking
garage</a> I used.
And from there about 20 minutes with public transport to the Université libre de
Bruxelles.</p>
<p>Staying another night meant I had more time for sightseeing, had the time to
write this post from my notes and could drive home well rested the next day.</p>
<p>As for tech: besides a phone and laptop, I also brought along two items that
made the trip more comfortable:</p>
<ul>
<li>A <a href="https://mojogear.eu/en/products/mojogear-mini-evo-10-000-mah-power-bank-22-5w">MOJOGEAR Mini
Evo</a>
powerbank to give my phone extra juice to make it through the day. With 10,000
mAh and up to 22.5W of power it&rsquo;s more than sufficient for a day at a
conference. With its small size and less than 175 grams in weight, it&rsquo;s also
easy to carry around.</li>
<li>A <a href="https://www.gl-inet.com/products/gl-sft1200/">GL.iNet Opal (GL-SFT1200)</a>
travel router. I plug it in, hook it up to the hotel internet, start a VPN
connection and all my other devices automatically connect to it and can use
the internet without the hotel snooping on my traffic. (Not that I have an
indication that my hotel would do that, but theoretically they could if I
would not use a VPN.)</li>
</ul>]]></content>
  </entry>
  <entry>
    <title type="html"><![CDATA[Full circle: rediscovering my joy in software engineering]]></title>
    <link rel="alternate" href="https://markvanlent.dev/2025/06/08/full-circle-rediscovering-my-joy-in-software-engineering/" type="text/html" />
    <id>https://markvanlent.dev/2025/06/08/full-circle-rediscovering-my-joy-in-software-engineering/</id>
    <author>
      <name>Mark van Lent</name>
      <uri>https://markvanlent.dev/about/</uri>
    </author>
    <category term="development" />
    <category term="go" />
    <category term="personal" />
    <category term="python" />
    
    <updated>2025-06-08T16:08:56Z</updated>
    <published>2025-06-08T00:00:00Z</published>
<content type="html"><![CDATA[<p>This is a different kind of post than I normally write here. Most other posts
are about a problem I ran into or a conference I visited. This time it is more of
a storytelling post. I thought it would be nice to have a sort of summary of what
kind of work I have been doing for the last decade.</p>
<p>For years I&rsquo;ve had a page <a href="/about/me">about me</a> which tells a bit about my
background and past and present jobs. In this post I would like to zoom in on,
roughly, the last thirteen years and write a bit about how my work and interests
evolved.</p>
<h2 id="software-developer">Software developer</h2>
<p>Let&rsquo;s start with my job at <a href="https://www.fox-it.com/">Fox-IT</a>, the company I
joined as a
<a href="https://www.djangoproject.com/">Django</a>/<a href="https://www.python.org/">Python</a>
developer mid 2013. I started building a portal for the customers of their
managed <a href="https://en.wikipedia.org/wiki/Security_operations_center">SOC</a> service
and I figured that&mdash;once this portal was done&mdash;I would find something else
within Fox to work on. However, this project only grew in functionality and even
became the tool used by the SOC analysts to get alerted on new incidents and
start their analysis in. I think all in all I spent the first four to five years
at Fox working on this platform on a daily basis.</p>
<p>Meanwhile, due to organisational changes, my team was supposed to become less
dependent on a different business unit and as a result we would need to manage
our own infrastructure more. I&rsquo;ve always been interested in that kind of work,
so I started picking that up. And due to my background as a developer I wanted
to automate as much as possible. As a result my days started to become more and
more about creating and maintaining a testing environment. (I also have to admit
that messing around with physical servers was fun, especially initially.)</p>
<h2 id="infrastructure-developer">Infrastructure developer</h2>
<p>Slowly my work had become more about the infrastructure surrounding the product
we were developing than about writing code for the product itself. In hindsight I
think it was about 2018 when I was effectively no longer a developer on the
product. Instead of implementing features I was using
<a href="https://www.packer.io/">Packer</a> to create templates for machine images, writing
<a href="https://www.terraform.io/">Terraform</a> to use these images (and to manage other
infrastructure) and using
<a href="https://en.wikipedia.org/wiki/Ansible_(software)">Ansible</a> to help deploy the
product, et cetera.</p>
<p>Did I overengineer it? Probably. Did I like it and have I learned a lot from it?
Definitely!</p>
<p>Because I dislike the term &ldquo;DevOps engineer&rdquo;, I decided to call myself an
&ldquo;infrastructure developer&rdquo;. (Though I have to admit that on my CV and social
media profiles I used the title DevOps engineer when I was applying for a new
job since&mdash;whether I liked it or not&mdash;that is a more familiar term.) Looking
back at this now, there was also a clue hidden in there, but I&rsquo;ll get back to
that.</p>
<p>Since I was the only person doing this kind of work for my team, the
organisation figured it would be good to pair up with a colleague who was doing
similar kind of work for a different team to have some redundancy. While in
practice this did not work that well (yes, we had a similar role, but the
platforms and infrastructure were too diverse), it did lead to a new
opportunity.</p>
<p>A different team needed an extra person to help create a self-service, on-demand
environment to perform digital forensic investigations in. And given my interest
in cloud infrastructure (AWS) and my experience, I was a good fit. I really
liked that project, learned a lot and enjoyed myself. And I wanted to do more of
this kind of work. However, this meant I had to look elsewhere.</p>
<h2 id="mission-critical-engineer">Mission Critical Engineer</h2>
<p>And that is how I ended up at <a href="https://schubergphilis.com/">Schuberg Philis</a> as
a mission critical engineer. As I had expected, this role is heavily operations
focussed. In my case, I helped to <a href="https://schubergphilis.com/how-we-work/plan-build-run">plan, build and
run</a> AWS infrastructure
for one of our customers. Unfortunately it was mostly &ldquo;run&rdquo; though. Don&rsquo;t get me
wrong, I definitely leveled up my AWS skills and genuinely enjoyed my time in
that team. But&hellip;</p>
<p>At a certain point in time our customer wanted to add an existing application to
their mission critical environment. Since it was a Python application (a Lambda
actually) I volunteered to help improve the application so it would be in a
state where we felt comfortable to offer 100% uptime and 24/7 support. Only then
did I realise what I was missing: software development and the joy that it gave
me.</p>
<p>Sure, I had been doing operations related work in the past, but in hindsight
most of the time I was still developing. Not building an application perhaps,
but infrastructure. I had always been more of a developer than an administrator.
I guess that was also why I liked the title &ldquo;infrastructure <strong>developer</strong>&rdquo;.</p>
<p>Lucky for me I was able to switch to a different team.</p>
<h2 id="mission-critical-software-engineer">Mission Critical Software Engineer</h2>
<p>And that&rsquo;s how I ended up where I am today. A little over a year ago I switched to
a role where I can focus on writing software again. And we, as a group of
software engineers, are also responsible for running, monitoring and supporting
our own services. So thinking about infrastructure is still a (small) part of
the job.<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup> But the main chunk of work is software engineering.</p>
<p>One thing did change though. Where my previous development jobs had been Python
oriented, in this team we use <a href="https://go.dev/">Go</a> to write our services. This
was part of my plan: by joining a Go team, I could broaden my horizon by
learning a new language.</p>
<p>Go was not completely new to me. I had done a
<a href="/2018/06/27/devopsdays-amsterdam-2018-workshops/#go-for-ops--michael-hausenblas-red-hat">Go workshop in 2018</a>.
And I had also made an attempt to rewrite an internal Python command line
application in Go. However, I had not properly learned the language, let
alone worked with it as part of my job.</p>
<p>I might write more about learning and working with Go in a future post, but that
is beyond the scope of this one. I do want to say I thoroughly enjoy being a
software engineer again and learning how to do things in a different language.</p>
<div class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1">
<p>We also have a couple of people in our team who are responsible for
setting up and maintaining the infrastructure we are running our services
on.&#160;<a href="#fnref:1" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
</ol>
</div>]]></content>
  </entry>
  <entry>
    <title type="html"><![CDATA[PyGrunn 2024]]></title>
    <link rel="alternate" href="https://markvanlent.dev/2024/05/17/pygrunn-2024/" type="text/html" />
    <id>https://markvanlent.dev/2024/05/17/pygrunn-2024/</id>
    <author>
      <name>Mark van Lent</name>
      <uri>https://markvanlent.dev/about/</uri>
    </author>
    <category term="conference" />
    <category term="python" />
    
    <updated>2024-05-17T16:42:29Z</updated>
    <published>2024-05-17T00:00:00Z</published>
    <content type="html"><![CDATA[<p>Notes from my day at the 12th edition of PyGrunn.</p>
<p><a href="https://pygrunn.org/">PyGrunn</a> is a Python focussed, one day conference held in
Groningen<sup id="fnref:1"><a href="#fn:1" class="footnote-ref" role="doc-noteref">1</a></sup>. Or as the organizers more eloquently phrase this:</p>
<figure class="float-right"><img src="/images/pygrunn24_banner.jpg"
    alt="PyGrunn banner" width="150px">
</figure>

<blockquote>
<p>PyGrunn is the &ldquo;Python and friends&rdquo; developer conference with a local
footprint and global mindset. Firmly rooted in the open source culture, it
aims to provide the leaders in advanced internet technologies a platform to
inform, inspire and impress their peers.</p></blockquote>
<p>Before I start with my notes I want to give a shout-out to Reinout van Rees.
<a href="https://reinout.vanrees.org/weblog/tags/pygrunn.html">His (PyGrunn) summaries</a>
are excellent. I&rsquo;m always impressed by the quality of them and how little time
he needs to write them. Where I&rsquo;m only able to make notes and need to write them
out afterwards, Reinout has the summary (as a coherent story) ready before the
speaker has unhooked their laptop. So if you are interested in one of the talks
he has attended, head over
<a href="https://reinout.vanrees.org/weblog/tags/pygrunn.html">there</a> first.</p>
<h2 id="platform-engineering-python-perspective--andrii-mishkovskyi">Platform Engineering: Python Perspective &mdash; Andrii Mishkovskyi</h2>
<p>Why do platform engineering teams exist? We&rsquo;ve seen a &ldquo;shift left&rdquo; of
responsibilities towards the developer, e.g. QA and operations (DevOps). But we
(software developers) are trained to write code, not to e.g. monitor it.</p>
<p>So where does a humble developer start? There is an abundance of choice in the
tools to use. What do you pick for package management? Or continuous
integration? Or deployment, code quality, observability, etc. This freedom of
choice comes with a cost. We start by discussing which tools to use instead of
the problem the customer is facing. And depending on which choices we make, the
result may make reasoning about the software more complex.</p>
<p>So what should you do?</p>
<p>Andrii broke it down into three parts:</p>
<ul>
<li>You observe (i.e. you read a lot to get the lay of the land)</li>
<li>You execute (this is the actual software development part)</li>
<li>And then you collect feedback (you reach out to teams, you observe how their work has changed)</li>
</ul>
<p>Some of the platform engineering team deliverables:</p>
<ul>
<li>Documentation</li>
<li>Self service portal (tip: look into <a href="https://backstage.io/">Backstage</a>)</li>
<li>Boilerplates</li>
<li>APIs</li>
</ul>
<p>The &ldquo;consumers&rdquo; are developers, compliance teams and other platform teams. The
goal is to:</p>
<ul>
<li>Have reasonable defaults</li>
<li>Remove redundancy</li>
<li>Keep things consistent</li>
</ul>
<p>To get a feel for the scale of things: at Andrii&rsquo;s company there are over 160
services, developed by 300+ developers in 500+ repositories. They total up
to 3 million lines of code and 6 million lines of YAML.</p>
<p>Templates (boilerplates) provide a paved path. The goal is to have teams spend
as little time as possible when starting a project. The templates use a certain
set of tools that are supported. Teams are free to use different tools, though,
if they want to.</p>
<p>Andrii uses <a href="https://github.com/cookiecutter/cookiecutter">cookiecutter</a>
templates at work. It&rsquo;s not his choice per se, but it&rsquo;s what was already in place
when he joined. There are currently three templates in use. They have evolved
over time. For example, in the last nine years over 800 changes have been made
(that is more than one change per week on average).</p>
<p>The evolution has left its marks: the templates have a lot of code duplication.
There is also code specific to a tiny minority of projects (only about 8 out of
the 500+). This means that most projects start with deleting code after using
the boilerplate.</p>
<p>And that also relates to the downside of using cookiecutter the way they do. Instead of
just using it to get started with a project, they also use it incrementally. But
cookiecutter does not have versioning built-in. So if you remove a file and then
reapply cookiecutter, the file is happily created again.</p>
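<p>To make that re-apply problem concrete, here is a stdlib-only Python sketch of my own (not from the talk); the plain file copy stands in for cookiecutter, which likewise has no memory of files a project has deleted:</p>

```python
# Sketch of the re-apply problem: a naive template "apply" that just
# copies files has no record of what a project deleted, so rerunning it
# resurrects the file. Cookiecutter behaves analogously because it has
# no built-in versioning.
import pathlib
import shutil
import tempfile

root = pathlib.Path(tempfile.mkdtemp())
template = root / "template"
project = root / "project"
template.mkdir()
(template / "unwanted.cfg").write_text("boilerplate\n")

def apply_template(src: pathlib.Path, dst: pathlib.Path) -> None:
    # Copy the template over the project, keeping existing files.
    shutil.copytree(src, dst, dirs_exist_ok=True)

apply_template(template, project)            # initial scaffold
(project / "unwanted.cfg").unlink()          # the team deletes the file
apply_template(template, project)            # incremental re-apply...
print((project / "unwanted.cfg").exists())   # prints: True -- it's back
```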
<p>While Andrii is aware of the issues (and thinks
<a href="https://github.com/copier-org/copier">copier</a> might be a better alternative),
it is hard to replace practices that are already in use. And cookiecutter is
great for getting started with a project.</p>
<p>With regard to the standardization, they use the following in the templates:</p>
<ul>
<li><code>pyproject.toml</code> for all projects</li>
<li>Poetry (instead of setuptools + pip-tools)</li>
<li>sprinkle <a href="https://github.com/renovatebot/renovate">Renovate</a> on top for
automatically updating dependencies</li>
</ul>
<p>As it currently stands, 99 projects have migrated to <code>pyproject.toml</code> and
Poetry in the last two years. It makes sense because it takes time for projects to
transition. Plus they are not <em>required</em> to migrate; again: the templates are there
to help, not to limit the users. Renovate has been adopted more quickly.</p>
<p>Migrating from e.g. <code>pkg_resources</code> to <code>pkgutil</code> or
<a href="https://peps.python.org/pep-0420/">PEP 420</a> for namespace packages is hard.
Templates can help with that. However, cookiecutter does not actually <em>manage</em>
files. So if a file has been removed from a template, rerunning cookiecutter
does not remove the file from the project. So that requires some care.</p>
<p>When they migrated from a monolithic application to a microservices
architecture, authentication/authorization became an issue. There was no
visibility for teams, no transparency and no accountability. To combat this, they
created an API where applications can declare the required access and scopes
in a YAML file. Maintainers can approve this access. And this also allows for
CI/CD to check access. A CLI tool can verify the validity of the YAML and check
if access is actually approved.</p>
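<p>I did not note the actual file format or tooling, but the kind of CI check described could look roughly like this. The schema (an <code>access</code> list with <code>scope</code> and <code>approved</code> fields) is invented for illustration, and JSON is used instead of YAML to keep the sketch dependency-free:</p>

```python
# Hypothetical sketch of a check that declared scopes have maintainer
# approval. The schema below is made up for illustration; the real tool
# and file format were not shown in the talk.
import json

declaration = json.loads("""
{
  "service": "billing",
  "access": [
    {"scope": "invoices:read", "approved": true},
    {"scope": "customers:write", "approved": false}
  ]
}
""")

def unapproved_scopes(decl: dict) -> list[str]:
    """Return declared scopes that lack maintainer approval."""
    return [a["scope"] for a in decl.get("access", []) if not a.get("approved")]

problems = unapproved_scopes(declaration)
print(problems)  # prints: ['customers:write']
```

A CI job would fail the build when this list is non-empty.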
<h2 id="securing-your-team-solution-and-company-to-embrace-chaos--edzo-botjes">Securing your team, solution and company to embrace chaos &mdash; Edzo Botjes</h2>
<p>Edzo started by sharing a link to
<a href="https://docs.google.com/presentation/d/1j7HgfiZXd51QdPHD1_yptzbxx8s9O2BNsA0dCG-Ajes/edit#slide=id.g2dd9784acd0_0_108">his slides</a>
and warned us that it usually takes two weeks for people to digest the contents
of his talk. He would overload us with information. And that&rsquo;s when I decided
to solely concentrate on the talk and not on note taking.</p>
<blockquote>
<p>Nobody knows what they are doing. Embrace this.</p></blockquote>
<p>Even a simple, deterministic system like a <a href="https://en.wikipedia.org/wiki/Double_pendulum">double
pendulum</a> has a chaotic, practically unpredictable
outcome. In other (my) words: the whole world is in chaos and unpredictable.</p>
<p>When presented with information, everyone processes it differently and
understands something different (also see the viral phenomenon of <a href="https://en.wikipedia.org/wiki/The_dress">the dress</a>).</p>
<p><img src="/images/pygrunn24_edzo_botjes.png" alt="Different perspectives: one person sees a circle, another a rectangle"></p>
<p>What worked for Edzo was to embrace chaos. To do this he let go of his desire to
control things and to create a predictable outcome.</p>
<h2 id="descriptors-decoding-the-magic--alex-dijkstra">Descriptors: Decoding the Magic &mdash; Alex Dijkstra</h2>
<p>Many people have used descriptors without even being aware of it.</p>
<p>From the documentation:</p>

  <figure>

<blockquote cite="https://docs.python.org/3/howto/descriptor.html">
Descriptors let objects customize attribute lookup, storage, and deletion.
</blockquote>

  <figcaption>
    &mdash;<cite><a href="https://docs.python.org/3/howto/descriptor.html">Descriptor Guide</a></cite>
  </figcaption>
  </figure>


<p>You can view descriptors as reusable @properties. A descriptor implements the <code>__get__</code> and
<code>__set__</code> methods (and when needed <code>__delete__</code>).</p>
<p>Alex showed a bunch of examples. This is the template he showed to introduce
descriptors:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-python" data-lang="python"><span class="line"><span class="cl"><span class="k">class</span> <span class="nc">MyDescriptor</span><span class="p">:</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">    <span class="k">def</span> <span class="fm">__get__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">obj</span><span class="p">,</span> <span class="n">owner</span><span class="p">):</span>
</span></span><span class="line"><span class="cl">        <span class="c1"># owner is the class to which the instance belongs</span>
</span></span><span class="line"><span class="cl">        <span class="k">return</span> <span class="n">obj</span><span class="o">.</span><span class="vm">__dict__</span><span class="o">.</span><span class="n">get</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">private_name</span><span class="p">)</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">    <span class="k">def</span> <span class="fm">__set__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">obj</span><span class="p">,</span> <span class="n">val</span><span class="p">):</span>
</span></span><span class="line"><span class="cl">        <span class="c1"># self is descriptor instance.</span>
</span></span><span class="line"><span class="cl">        <span class="n">obj</span><span class="o">.</span><span class="vm">__dict__</span><span class="p">[</span><span class="bp">self</span><span class="o">.</span><span class="n">private_name</span><span class="p">]</span> <span class="o">=</span> <span class="n">val</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">    <span class="k">def</span> <span class="nf">__set_name__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">owner</span><span class="p">,</span> <span class="n">name</span><span class="p">):</span>
</span></span><span class="line"><span class="cl">        <span class="bp">self</span><span class="o">.</span><span class="n">public_name</span> <span class="o">=</span> <span class="n">name</span>
</span></span><span class="line"><span class="cl">        <span class="bp">self</span><span class="o">.</span><span class="n">private_name</span> <span class="o">=</span> <span class="sa">f</span><span class="s1">&#39;_</span><span class="si">{</span><span class="n">name</span><span class="si">}</span><span class="s1">&#39;</span>
</span></span></code></pre></div><p>His example of how to use this:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-python" data-lang="python"><span class="line"><span class="cl"><span class="k">class</span> <span class="nc">MyClass</span><span class="p">:</span>
</span></span><span class="line"><span class="cl">    <span class="n">value</span> <span class="o">=</span> <span class="n">MyDescriptor</span><span class="p">()</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl"><span class="n">myinstance</span> <span class="o">=</span> <span class="n">MyClass</span><span class="p">()</span>
</span></span><span class="line"><span class="cl"><span class="n">myinstance</span><span class="o">.</span><span class="n">value</span> <span class="o">=</span> <span class="mi">5</span>  <span class="c1"># MyDescriptor.__set__</span>
</span></span><span class="line"><span class="cl"><span class="n">myinstance</span><span class="o">.</span><span class="n">value</span>      <span class="c1"># MyDescriptor.__get__</span>
</span></span></code></pre></div><p>Using descriptors you can run code when getting or setting the value. E.g. in a
class you can enforce that an attribute has a certain type.</p>
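<p>For example, a type-enforcing descriptor could look like this (my own sketch, following the same pattern as the template above, not code from the talk):</p>

```python
# A minimal type-enforcing descriptor: __set__ validates the value
# before storing it under the private name set up by __set_name__.
class Typed:
    def __init__(self, expected_type):
        self.expected_type = expected_type

    def __set_name__(self, owner, name):
        self.private_name = f"_{name}"

    def __get__(self, obj, owner):
        if obj is None:          # accessed on the class, not an instance
            return self
        return getattr(obj, self.private_name)

    def __set__(self, obj, value):
        if not isinstance(value, self.expected_type):
            raise TypeError(
                f"expected {self.expected_type.__name__}, got {type(value).__name__}"
            )
        setattr(obj, self.private_name, value)


class Point:
    x = Typed(int)
    y = Typed(int)


p = Point()
p.x = 3           # accepted by Typed.__set__
try:
    p.y = "oops"  # rejected by Typed.__set__
except TypeError as exc:
    print(exc)    # prints: expected int, got str
```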
<p>The <code>__set_name__</code> method was introduced in Python 3.6. You do not need to use it in all of your
descriptors, but it also doesn&rsquo;t hurt.</p>
<p>Do you want to use descriptors all over the place? No. They can be useful: you
can create clean APIs with them, which helps if the API is used frequently.
However, they do create some overhead and make the code a bit more complex.</p>
<p>Resources:</p>
<ul>
<li><a href="https://docs.python.org/3/howto/descriptor.html">https://docs.python.org/3/howto/descriptor.html</a></li>
<li>Luciano Ramalho&rsquo;s book
<a href="https://www.oreilly.com/library/view/fluent-python-2nd/9781492056348/">Fluent Python</a>
(Luciano also did a few talks about descriptors)</li>
</ul>
<h2 id="release-the-krakend--erik-jan-blanksma">Release the KrakenD &mdash; Erik-Jan Blanksma</h2>
<p><a href="https://www.krakend.io/">KrakenD</a> is an API gateway product. Erik-Jan likes it
so much he wanted to share his experience with it.</p>
<p>Projects can start out simple, with a monolith that is accessed via a web
client. Before you know it, there are several services and multiple types of
clients. The solution is to introduce an API Gateway in the middle. It can then
handle the incoming requests.</p>
<p>API Gateway in short:</p>
<ul>
<li>It sits between clients and backend (as a sort of portal).</li>
<li>It hides internal complexity of backend for the clients.</li>
<li>The gateway is a great place to introduce things like
authorization/authentication, logging, load balancing, caching, etc.</li>
</ul>
<p>KrakenD is one of the available API Gateways. It is open source, but there&rsquo;s
also an enterprise version with extra features. KrakenD is implemented in Go and
offers a bunch of features out of the box (monitoring, throttling, request and
response manipulation). KrakenD offers integrations with e.g. tools (Jaeger,
Grafana, the Elastic stack), authorization/authentication services and queues
(RabbitMQ).</p>
<p>KrakenD is a stateless process, so no database is needed. It takes a JSON (or
YAML) configuration. It can combine the results of multiple backend API calls and
return them as a single response.</p>
<p>Tips:</p>
<ul>
<li>Use the KrakenDesigner (makes it easy to explore what&rsquo;s possible). Note that
you do not want to use this in production.</li>
<li>You&rsquo;ll want to split up the configuration when it grows. By using flexible
configuration you can combine so-called partials, settings and templates.</li>
<li>KrakenD can check and even audit your configuration.</li>
<li>Since the configuration is in JSON, you can generate
OpenAPI.json from the KrakenD config. You can use this for Swagger.</li>
</ul>
<p>KrakenD is a great tool to manage your APIs. It is lightweight, fast, easy to
configure and has lots of functions out of the box. It is versatile and
extensible. By using it you can make your architecture more agile.</p>
<p>However, it also means that you will have to manage the API Gateway
configuration. And a change in the configuration means you will have to restart
the process.</p>
<h2 id="general-tips">General tips</h2>
<p>Some general notes:</p>
<ul>
<li>Have a look at:
<ul>
<li><a href="https://docs.pydantic.dev/latest/">Pydantic</a></li>
<li><a href="https://github.com/aws/chalice">Chalice</a></li>
<li><a href="https://xata.io/">Xata</a></li>
</ul>
</li>
<li>It can be helpful to use functional programming (e.g. using closures) instead of
defaulting to classes and methods.</li>
</ul>
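<p>As a small illustration of that last tip (my example, not one from the conference): a counter written as a closure instead of the class you might reach for by default:</p>

```python
# A counter as a closure: the enclosed variable `count` plays the role
# of instance state, without defining a class.
def make_counter(start: int = 0):
    count = start

    def increment(step: int = 1) -> int:
        nonlocal count   # rebind the enclosed variable
        count += step
        return count

    return increment


counter = make_counter()
counter()           # 1
counter()           # 2
print(counter(3))   # prints: 5
```

Each call to <code>make_counter()</code> creates independent state, just like instantiating a class.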
<div class="footnotes" role="doc-endnotes">
<hr>
<ol>
<li id="fn:1">
<p>&ldquo;Grunn&rdquo; is what Groningen is called in the regional language.&#160;<a href="#fnref:1" class="footnote-backref" role="doc-backlink">&#x21a9;&#xfe0e;</a></p>
</li>
</ol>
</div>]]></content>
  </entry>
  <entry>
    <title type="html"><![CDATA[Podcasts I listen to — 2019 edition]]></title>
    <link rel="alternate" href="https://markvanlent.dev/2019/10/13/podcasts-i-listen-to-2019-edition/" type="text/html" />
    <id>https://markvanlent.dev/2019/10/13/podcasts-i-listen-to-2019-edition/</id>
    <author>
      <name>Mark van Lent</name>
      <uri>https://markvanlent.dev/about/</uri>
    </author>
    <category term="aws" />
    <category term="devops" />
    <category term="observability" />
    <category term="podcast" />
    <category term="python" />
    <category term="security" />
    
    <updated>2021-11-09T20:09:33Z</updated>
    <published>2019-10-13T17:08:00Z</published>
    <content type="html"><![CDATA[<p>Most of the time I use my commute to listen to podcasts. Because they reflect my
interests at this point in time, I thought it would be nice to share my current
list.</p>
<p>The last time I posted my selection of podcasts was back in
<a href="/2012/11/12/podcasts-i-listen-to/">2012</a>. Since then, a lot has changed:</p>
<ul>
<li>Only two out of the six podcasts on the 2012 list are still in my feed. But
there are quite a few new podcasts to compensate.</li>
<li>My commute has reduced from six hours/week to about four.</li>
<li>Out of necessity I no longer listen to all episodes of all podcasts&mdash;I just
do not have enough time to listen to them. But all podcasts in the list below
have interesting episodes, so unsubscribing from them is also not an option.
(I know, first world problems&hellip;)</li>
</ul>
<p>My podcast feed with the reason why I listen to each show:</p>
<ul>
<li><a href="https://www.lastweekinaws.com/">AWS Morning brief</a>: I use this podcast to
quickly get up to speed on AWS related developments. Oh, and I like the snarky
way Corey Quinn summarises the news.</li>
<li><a href="https://www.redhat.com/en/command-line-heroes">Command line heroes</a>: a fun
and educational podcast about topics that interest me as a developer. For
example season 2 focussed on open source development and season 3 was all
about (the history of) programming languages.</li>
<li><a href="https://darknetdiaries.com/">Darknet Diaries</a>: interesting stories about
<q>the dark side of the internet.</q> I love the stories and the way
Jack Rhysider tells them.</li>
<li><a href="https://freakonomics.com/">Freakonomics Radio</a>: a completely different show
because it is not technology focussed. Stephen Dubner discusses topics I often
don&rsquo;t know much about or have not thought about much. I think the episodes are
not only educational but also fun to listen to.</li>
<li><a href="https://www.heavybit.com/library/podcasts/o11ycast/">O11ycast</a>: this one I
recently discovered. Since I&rsquo;m interested in the subject of observability,
this podcast gives me new ideas to think about.</li>
<li><a href="https://www.realworlddevops.com/">Real World DevOps</a>: I really like Mike
Julian&rsquo;s interviews with people from the tech industry about DevOps related
topics. Unfortunately there has not been a new episode for a few months; I
hope Mike will start podcasting again in the future.</li>
<li><a href="https://gimletmedia.com/shows/reply-all">Reply All</a>: a brilliant show. I had
lost track of the hosts, PJ Vogt and Alex Goldman, after they stopped with the
TLDR podcast. Unfortunately I did not search for them, otherwise I would have
found Reply All a lot sooner! I listen to this podcast primarily to get
amused; learning something from it is a nice side effect though.</li>
<li><a href="https://www.lastweekinaws.com/podcast/screaming-in-the-cloud/">Screaming in the cloud</a>: another podcast
from Corey Quinn. I listen to it to learn more about cloud related topics
from the interviews on this show.</li>
<li><a href="https://twit.tv/shows/security-now">Security Now</a>: I listen to this podcast
for the security related news and an occasional deep dive into a technical
topic.</li>
<li><a href="https://shoptalkshow.com/">Shop Talk Show</a>: although these days I&rsquo;m more into
infrastructure and automation, I like listening to this show to keep in
touch with web development.</li>
<li><a href="https://talkpython.fm/">Talk Python to me</a>: more or less the same as with
Shop Talk Show: I like this podcast for its (Python) development related topics.</li>
<li><a href="https://www.tech45.eu/">Tech45</a>: a podcast I&rsquo;ve listened to for quite some years
now and one of the few shows I want to listen to every single episode of. I
enjoy the way the panel discusses the (tech related) news each week.
Informative and &ldquo;gezellig.&rdquo;</li>
<li><a href="https://soundcloud.com/user-98066669">The privacy, security &amp; OSINT show</a>:
the most recent addition to my list. I have only listened to a few episodes so far,
but the topics seem interesting. We&rsquo;ll see if this one sticks or not.</li>
</ul>
<p>And there you have it: my list of podcasts as of October 2019. (Also available in
<a href="/files/podcasts-2019.opml">OPML format</a>.)</p>]]></content>
  </entry>
  <entry>
    <title type="html"><![CDATA[Open tabs]]></title>
    <link rel="alternate" href="https://markvanlent.dev/2018/05/12/open-tabs/" type="text/html" />
    <id>https://markvanlent.dev/2018/05/12/open-tabs/</id>
    <author>
      <name>Mark van Lent</name>
      <uri>https://markvanlent.dev/about/</uri>
    </author>
    <category term="backups" />
    <category term="devops" />
    <category term="git" />
    <category term="python" />
    <category term="security" />
    <category term="tabs" />
    <category term="terraform" />
    
    <updated>2025-09-13T21:07:32Z</updated>
    <published>2018-05-12T00:00:00Z</published>
    <content type="html"><![CDATA[<p>Currently I have about 30 tabs open in the browser on my phone. Quite
a few of them are open because I want to read the article in the
future, have already read the article and want to reread or act on it,
or a combination of the above. In this article I list the open tabs
(and some notes) so I can close them on my phone but still have a
reference to them.</p>
<h2 id="development">Development</h2>
<dl>
<dt><a href="https://blog.scottnonnenberg.com/better-git-configuration/">Better Git configuration</a></dt>
<dd>Some tips from Scott Nonnenberg to improve your Git configuration.</dd>
<dt><a href="https://jacobian.org/2018/feb/21/python-environment-2018/">My Python Development Environment, 2018 Edition</a></dt>
<dd>A good description by Jacob Kaplan-Moss of how he uses
<a href="https://github.com/pyenv/pyenv">pyenv</a>,
<a href="https://pipenv.pypa.io/en/latest/">pipenv</a> and
<a href="https://github.com/mitsuhiko/pipsi">pipsi</a> for Python development.</dd>
</dl>
<h2 id="operations">Operations</h2>
<dl>
<dt><a href="https://borgbackup.readthedocs.io/en/stable/">BorgBackup documentation</a></dt>
<dd>Something I want to play around with&mdash;and perhaps use&mdash;to make
backups.</dd>
<dt><a href="https://www.opsschool.org/">Ops School Curriculum</a></dt>
<dd>A very comprehensive resource to learn to be an operations engineer.</dd>
<dt><a href="https://www.serverlessops.io/blog/serverless-ops-what-do-we-do-when-the-server-goes-away">Serverless Ops: What do we do when the server goes away?</a></dt>
<dd>Tom McLaughlin writes about the changing role of DevOps/Operations
engineers in a &lsquo;serverless&rsquo; world.</dd>
<dt><a href="https://news.ycombinator.com/item?id=12672797">Ask HN: How do you back up your site hosted on a VPS such as Digital Ocean?</a></dt>
<dd>A bunch of comments with suggestions on how to arrange backups for a
VPS. (I need some inspiration for my own VPS.)</dd>
<dt><a href="https://steemit.com/technology/@taoteh1221/securely-using-amazon-s3-buckets-for-server-backups">Securely Using Amazon S3 Buckets For Server Backups</a></dt>
<dd>See above; this is one of the candidates.</dd>
<dt><a href="https://github.com/kahun/awesome-sysadmin/blob/master/README.md">Awesome Sysadmin</a></dt>
<dd><q>A curated list of amazingly awesome open source sysadmin resources.</q></dd>
</dl>
<h2 id="security">Security</h2>
<dl>
<dt><a href="https://dev-sec.io/">Automatic Server Hardening</a></dt>
<dd>Server hardening tips plus Chef, Puppet and Ansible modules. (Source:
<a href="https://ma.ttias.be/cronweekly/issue-94/">Cron weekly, issue 94</a>)</dd>
<dt><a href="https://decentsecurity.com/">Decent Security</a></dt>
<dd>Information on how to secure your devices (Windows, routers).</dd>
</dl>
<h2 id="devops">DevOps</h2>
<dl>
<dt><a href="https://github.com/chris-short/DevOps-README.md">DevOps README.md</a></dt>
<dd><q>A curated list of things to read to level up your DevOps skills and
knowledge</q> by Chris Short. (Source: <a href="https://devopsish.com/043/">DevOps&rsquo;ish, issue 043</a>)</dd>
<dt><a href="https://copyconstruct.medium.com/monitoring-and-observability-8417d1952e1c">Monitoring and Observability</a></dt>
<dd>A great post by Cindy Sridharan explaining the difference between
monitoring and observability.</dd>
<dt><a href="https://www.contino.io/insights/a-model-for-scaling-terraform-workflows-in-a-large-complex-organization">A Model for Scaling Terraform Workflows in a Large, Complex Organization</a></dt>
<dd>An article by Ryan Lockard and Hibri Marzook about scaling your Terraform working practices.</dd>
<dt><a href="https://mybinder-sre.readthedocs.io/en/latest/">Site Reliability Guide for mybinder.org</a></dt>
<dd>This might contain useful information about how mybinder.org sets
things up and how to write this kind of documentation.</dd>
<dt><a href="https://charity.wtf/2016/03/30/terraform-vpc-and-why-you-want-a-tfstate-file-per-env/">Terraform, VPC, and why you want a tfstate file per env</a></dt>
<dd>Another Terraform article, this time by Charity Majors.</dd>
<dt><a href="https://copyconstruct.medium.com/testing-in-production-the-safe-way-18ca102d0ef1">Testing in Production, the safe way</a></dt>
<dd>Lots of information in this article by Cindy Sridharan.</dd>
<dt><a href="https://medium.com/statics-and-dynamics/working-with-terraform-10-months-in-c15ade10c9b9">Working with Terraform: 10 Months In</a></dt>
<dd>Perhaps this article by J.D. Hollis will save me some headaches (if I get around to reading it in time :) ).</dd>
</dl>
<h2 id="miscellaneous">Miscellaneous</h2>
<dl>
<dt><a href="https://www.goodreads.com/book/show/27833670-dark-matter">Dark Matter</a></dt>
<dd>A book recommendation that I still need to check out. This was the
first link that popped up when I Googled the title.</dd>
<dt><a href="https://engineer.john-whittington.co.uk/2016/11/raspberry-pi-data-logger-influxdb-grafana/">Raspberry Pi Data Logger with InfluxDB and Grafana</a></dt>
<dd>An article by John Whittington as input for my (almost dead) side
project to collect and graph data from my smart meter.</dd>
</dl>]]></content>
  </entry>
  <entry>
    <title type="html"><![CDATA[Font Awesome to PNG]]></title>
    <link rel="alternate" href="https://markvanlent.dev/2013/10/27/font-awesome-to-png/" type="text/html" />
    <id>https://markvanlent.dev/2013/10/27/font-awesome-to-png/</id>
    <author>
      <name>Mark van Lent</name>
      <uri>https://markvanlent.dev/about/</uri>
    </author>
    <category term="development" />
    <category term="icons" />
    <category term="python" />
    
    <updated>2021-08-20T20:23:14Z</updated>
    <published>2013-10-27T17:09:00Z</published>
    <content type="html"><![CDATA[<p>A site I&rsquo;m working on uses
<a href="https://fontawesome.com/">Font Awesome</a>. Font Awesome is an iconic font
designed for use with
<a href="https://getbootstrap.com/">Twitter&rsquo;s Bootstrap</a> and
currently (at version 4.0.0) includes 370 icons. It is a nice, easy-to-use
icon font. But I needed <code>PNG</code> files of the icons so I could use
the same icons in a different system.</p>
<p>Enter
<a href="https://github.com/Pythonity/font-awesome-to-png">Font Awesome to PNG</a>. It
is a Python script written by Michał Wojciechowski that allows you to do
exactly that: extract the icons from the Font Awesome <code>TTF</code> file and
save them as <code>PNG</code> files.</p>
<p>Here is an example of how I used it to get a blue version of the
comment icon:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">./font-awesome-to-png.py --color <span class="s2">&#34;#27a4cd&#34;</span> --size <span class="m">48</span> comment
</span></span></code></pre></div><p>The result is a nice <code>PNG</code>:</p>
<p><img src="/images/comment.png" alt="Comment icon" title="Comment icon"></p>
<p>A big thank you to Michał and everyone who contributed to this code.</p>]]></content>
  </entry>
  <entry>
    <title type="html"><![CDATA[Glances]]></title>
    <link rel="alternate" href="https://markvanlent.dev/2013/04/10/glances/" type="text/html" />
    <id>https://markvanlent.dev/2013/04/10/glances/</id>
    <author>
      <name>Mark van Lent</name>
      <uri>https://markvanlent.dev/about/</uri>
    </author>
    <category term="devops" />
    <category term="python" />
    <category term="tools" />
    
    <updated>2021-08-20T20:23:14Z</updated>
    <published>2013-04-10T11:45:00Z</published>
    <content type="html"><![CDATA[<p>Since I keep forgetting the name of this monitoring tool, I decided to
create an article so I can jog my memory more easily.</p>
<p>To get some basic information about your system,
<a href="https://en.wikipedia.org/wiki/Top_(software)">top</a> is a very useful
tool. But sometimes you need a little bit more.</p>
<p>If that is the case, you may want to check out
<a href="https://nicolargo.github.io/glances/">Glances</a>. Here&rsquo;s an example of
Glances in action on my virtual machine:</p>
<p><img src="/images/glances.png" alt="Glances example" title="Glances example"></p>
<p>Besides the basics (CPU usage, load, memory usage), it also
displays information like network usage and disk I/O. Incidentally,
Glances is written in Python.</p>]]></content>
  </entry>
  <entry>
    <title type="html"><![CDATA[Looking for a static blog engine? Try Acrylamid!]]></title>
    <link rel="alternate" href="https://markvanlent.dev/2012/10/01/looking-for-static-blog-engine-try-acrylamid/" type="text/html" />
    <id>https://markvanlent.dev/2012/10/01/looking-for-static-blog-engine-try-acrylamid/</id>
    <author>
      <name>Mark van Lent</name>
      <uri>https://markvanlent.dev/about/</uri>
    </author>
    <category term="acrylamid" />
    <category term="blog" />
    <category term="django" />
    <category term="plone" />
    <category term="python" />
    
    <updated>2021-08-19T13:13:50Z</updated>
    <published>2012-10-01T08:25:00Z</published>
    <content type="html"><![CDATA[<p>Several Pythonistas have switched to a static blog this year. If you are
also looking into static blog engines, give
<a href="https://posativ.org/acrylamid/">Acrylamid</a> a go.</p>
<p>Examples of people who went static include
<a href="https://web.archive.org/web/20120924023453/http://blog.aclark.net/yes-this-blog-is-now-powered-by-pelican.html">Alex Clark</a>,
<a href="https://web.archive.org/web/20120913015444/http://pydanny.com/my-new-blog.html">Daniel Greenfeld</a> and
<a href="https://ziade.org/2012/03/05/moving-to-pelican/">Tarek Ziadé</a>. What
they have in common is that they all use
<a href="https://blog.getpelican.com/">Pelican</a>.</p>
<p>You could follow their example and use Pelican&mdash;and it&rsquo;s probably a
good choice&mdash;but I recommend you also at least have a look at
Acrylamid. It is written in Python, is quite easy to get up and running,
and offers everything a blog needs (articles, tags, lists of articles,
pages and feeds). The author also responds quickly to issues, pull
requests and questions. You can write your content in (amongst others)
<a href="https://docutils.sourceforge.io/rst.html">reStructuredText</a> and
<a href="https://daringfireball.net/projects/markdown/">Markdown</a>. The
templates can be <a href="https://jinja.palletsprojects.com/">Jinja2</a> or
<a href="https://www.makotemplates.org/">Mako</a>.</p>
<p>Acrylamid is already great. But it is also under active development, so
it will get even better!</p>
<p>Disclaimer: As of today, I use Acrylamid for this blog (more about that
in the next article:
<a href="/2012/10/01/migrating-to-acrylamid/">migrating to Acrylamid</a>). I
also contributed some code to the project. So I might be a bit
biased&hellip;</p>]]></content>
  </entry>
  <entry>
    <title type="html"><![CDATA[Test sending emails while developing]]></title>
    <link rel="alternate" href="https://markvanlent.dev/2009/10/03/test-sending-emails-while-developing/" type="text/html" />
    <id>https://markvanlent.dev/2009/10/03/test-sending-emails-while-developing/</id>
    <author>
      <name>Mark van Lent</name>
      <uri>https://markvanlent.dev/about/</uri>
    </author>
    <category term="development" />
    <category term="django" />
    <category term="plone" />
    <category term="python" />
    <category term="testing" />
    <category term="tools" />
    
    <updated>2021-08-20T19:50:51Z</updated>
    <published>2009-10-03T12:14:00Z</published>
    <content type="html"><![CDATA[<p>I frequently have to send emails from web applications. But before I
deploy to a production environment, I want to make sure the mechanism
works and the right mails are constructed. Here are two ways to do that.</p>
<h2 id="monkey-patching-the-zope-mailhost">Monkey patching the Zope MailHost</h2>
<p>When developing a Zope based application, the
<a href="https://pypi.org/project/Products.PrintingMailHost/">Products.PrintingMailHost</a>
package can really help you out. If you include this package in your
setup, the Zope <code>MailHost</code> class is patched so that no actual emails
are sent. Instead, the content of each email is printed to standard output.</p>
<h2 id="smtp-server">SMTP server</h2>
<p>But when working on a Django application (or any other non-Zope
project) there is no <code>MailHost</code> class that can be monkey
patched. Python&rsquo;s
<a href="https://docs.python.org/2.7/library/smtpd.html">smtpd module</a> to the
rescue. The first step is to configure the application to use
localhost as the SMTP server on an unused port (say 1025). Next, go to
the command line and type:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">python -m smtpd -n -c DebuggingServer localhost:1025
</span></span></code></pre></div><p>Just like the PrintingMailHost, this SMTP server prints the emails to
standard output. For more information, see the
<a href="https://docs.djangoproject.com/en/dev/topics/email/#testing-e-mail-sending">&ldquo;Testing e-mail sending&rdquo; section</a>
in the Django documentation.</p>
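<p>To exercise the debugging server from code, here is a minimal sketch using
today&rsquo;s Python 3 email API; the addresses and subject are made-up
placeholders, and it assumes the debugging server from above is listening on
localhost:1025:</p>

```python
# Minimal sketch: build a test email and hand it to the local debugging
# server started with "python -m smtpd -n -c DebuggingServer localhost:1025".
import smtplib
from email.message import EmailMessage

def build_message():
    # All addresses and the subject below are made-up placeholders.
    msg = EmailMessage()
    msg["From"] = "dev@localhost"
    msg["To"] = "test@localhost"
    msg["Subject"] = "Hello from the debugging server"
    msg.set_content("If this shows up on stdout, the setup works.")
    return msg

if __name__ == "__main__":
    # The debugging server prints the message instead of delivering it.
    with smtplib.SMTP("localhost", 1025) as server:
        server.send_message(build_message())
```

<p>If everything is wired up correctly, the message appears on the terminal
running the debugging server instead of being delivered anywhere.</p>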
<div class="note update">
  <div class="note_header">
    Update (2010-08-09)<span class="hidden">:</span>
  </div>
  <div class="note_body">
    For the developer with a deadline: install
<a href="https://pypi.org/project/django-extensions/">django-extensions</a>
which has a couple of useful extra features. One of them is the
<code>mail_debug</code> management command. This command starts the same SMTP
debugging server, but you don&rsquo;t have to remember the right
incantation.
  </div>
</div>

<h2 id="http-server">HTTP server</h2>
<p>Somewhat related to this: if you want to test your application against
an HTTP server, you can use this command:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">python -m SimpleHTTPServer
</span></span></code></pre></div><p>The
<a href="https://docs.python.org/2.7/library/simplehttpserver.html#module-SimpleHTTPServer">SimpleHTTPServer module</a>
can be used to get a server up and running quickly. It can also be a
simple way to, for instance, copy files from one machine to
another. By running the HTTP server in the directory containing the
files, you can access the files via your browser on another
machine. Safe? No. Convenient? Yes.</p>]]></content>
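<p>The module was renamed to <code>http.server</code> in Python 3
(<code>python3 -m http.server</code>); as a rough sketch, the same behaviour can
also be had programmatically (the port number here is arbitrary):</p>

```python
# Python 3 equivalent of "python -m SimpleHTTPServer": serve the current
# directory over HTTP (the module was renamed to http.server).
from http.server import HTTPServer, SimpleHTTPRequestHandler

def make_server(port=8000):
    # Binding to "" listens on all interfaces, so another machine on the
    # network can fetch the files.
    return HTTPServer(("", port), SimpleHTTPRequestHandler)

if __name__ == "__main__":
    make_server().serve_forever()
```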
  </entry>
  <entry>
    <title type="html"><![CDATA[DateTime(&#39;2009/06/16&#39;) != DateTime(&#39;2009-06-16&#39;)]]></title>
    <link rel="alternate" href="https://markvanlent.dev/2009/06/17/datetime-2009-06-16-datetime-2009-06-16/" type="text/html" />
    <id>https://markvanlent.dev/2009/06/17/datetime-2009-06-16-datetime-2009-06-16/</id>
    <author>
      <name>Mark van Lent</name>
      <uri>https://markvanlent.dev/about/</uri>
    </author>
    <category term="plone" />
    <category term="python" />
    
    <updated>2021-07-16T07:25:56Z</updated>
    <published>2009-06-17T21:41:00Z</published>
    <content type="html"><![CDATA[<p>Be careful when parsing dates with the Zope <code>DateTime</code> module.</p>
<p>Recently I have been bitten by a small difference in the way the <code>DateTime</code>
module parses differently formatted date strings. Naively, you would think there
is no difference between the following two <code>DateTime</code> objects:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-python" data-lang="python"><span class="line"><span class="cl"><span class="o">&gt;&gt;&gt;</span> <span class="kn">from</span> <span class="nn">DateTime</span> <span class="kn">import</span> <span class="n">DateTime</span>
</span></span><span class="line"><span class="cl"><span class="o">&gt;&gt;&gt;</span> <span class="n">DateTime</span><span class="p">(</span><span class="s1">&#39;2009/06/16&#39;</span><span class="p">)</span>
</span></span><span class="line"><span class="cl"><span class="n">DateTime</span><span class="p">(</span><span class="s1">&#39;2009/06/16&#39;</span><span class="p">)</span>
</span></span><span class="line"><span class="cl"><span class="o">&gt;&gt;&gt;</span> <span class="n">DateTime</span><span class="p">(</span><span class="s1">&#39;2009-06-16&#39;</span><span class="p">)</span>
</span></span><span class="line"><span class="cl"><span class="n">DateTime</span><span class="p">(</span><span class="s1">&#39;2009/06/16&#39;</span><span class="p">)</span>
</span></span></code></pre></div><p>So far, so good. But:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-python" data-lang="python"><span class="line"><span class="cl"><span class="o">&gt;&gt;&gt;</span> <span class="n">DateTime</span><span class="p">(</span><span class="s1">&#39;2009/06/16&#39;</span><span class="p">)</span> <span class="o">==</span> <span class="n">DateTime</span><span class="p">(</span><span class="s1">&#39;2009-06-16&#39;</span><span class="p">)</span>
</span></span><span class="line"><span class="cl"><span class="kc">False</span>
</span></span></code></pre></div><p>Wait&hellip; What?!?</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-python" data-lang="python"><span class="line"><span class="cl"><span class="o">&gt;&gt;&gt;</span> <span class="n">DateTime</span><span class="p">(</span><span class="s1">&#39;2009/06/16&#39;</span><span class="p">)</span><span class="o">.</span><span class="n">rfc822</span><span class="p">()</span>
</span></span><span class="line"><span class="cl"><span class="s1">&#39;Tue, 16 Jun 2009 00:00:00 +0200&#39;</span>
</span></span><span class="line"><span class="cl"><span class="o">&gt;&gt;&gt;</span> <span class="n">DateTime</span><span class="p">(</span><span class="s1">&#39;2009-06-16&#39;</span><span class="p">)</span><span class="o">.</span><span class="n">rfc822</span><span class="p">()</span>
</span></span><span class="line"><span class="cl"><span class="s1">&#39;Tue, 16 Jun 2009 00:00:00 +0000&#39;</span>
</span></span></code></pre></div><p>As you can see, the way you format the date before feeding it to
<code>DateTime</code> matters: it <strong>may</strong> take the timezone into account. And this
can really mess up catalog queries if you use slashes when storing
your content and dashes for the query (or the other way around). I&rsquo;m
sure this is documented somewhere, but why search for documentation if
you &lsquo;know&rsquo; how it works? :)</p>]]></content>
  </entry>
</feed>
