
Information Technology

Information Technology
today

A Python project got hacked: malicious releases were uploaded directly to PyPI. I said on Mastodon that had the project used trusted publishing with digital attestations, people using a pylock.toml file would have noticed something odd was going on, because the lock file includes attestation data. That led to someone asking for a link to something explaining what I meant. I didn't have a link handy since it's buried in 4 years and over 1,800 comments of discussion, so I figured I would write a blog post. 😁

Since trusted publishing is a prerequisite for digital attestations, I'll cover that quickly. Basically, you can set a project up on PyPI so that a continuous deployment (CD) system can upload a release to PyPI on your behalf. Since PyPI has to trust the CD system to do security right (it's letting another site upload to PyPI on your behalf), not every CD system out there is supported, but the big ones are, and others get added as appropriate. Because this helps automate releases without exposing any keys to PyPI that someone might try to steal, it's a great thing to make your life as a maintainer easier while doing something safer; win-win!

Digital attestations are a way for a CD system to attest that a file came from that CD system. That's handy: once you know where a file should come from, you can verify that fact to make sure nothing nefarious is going on. And since this is just a thing to flip on, it's extremely simple to do. If you use the official PyPA publish action for GitHub Actions, you get it automatically. For other CD systems it should be a copy-and-paste thing in your CD configuration.

Now, the thing that pylock.toml records is who the publisher is for a file. Taking packaging as an example, you can look at the provenance for packaging-26.0-py3-none-any.whl that comes from the digital attestation, and you will notice it tells you the file came from GitHub via the pypa/packaging repo, using the publish.yml workflow run in the "pypi" environment (which you can also see via the file's details on PyPI):

"publisher": {
  "environment": "pypi",
  "kind": "GitHub",
  "repository": "pypa/packaging",
  "workflow": "publish.yml"
}

So what can you do with this information once it's recorded in your pylock.toml? Well, the publisher details are stored for each package in the lock file. That lets code check that any files listed in the lock file for that package version were published from the same publisher that PyPI, or whatever index you're using, says the file came from. If the lock file and the index differ on where they say a file came from, something bad may have happened.

What can you do as a person if you don't have code to check that things line up? (It isn't a lot of code: the lock file should have the index server for the package, so you follow the index server API to get the digital attestation for each file and compare.) There are two things you can do manually. One, if you know that a project uses trusted publishing, then the digital attestation details should be in the lock file (you can manually check by looking at the file details on PyPI); if they're missing or changed to something suspicious, something bad may have happened. Two, when looking at a PR that updates your lock file (and pylock.toml was designed to be human-readable), if digital attestation details suddenly disappear, then something bad probably happened.

So to summarize:

- Use trusted publishing if you're a maintainer
- Upload digital attestations if you're a maintainer
- Use lock files where appropriate (and I'm partial to pylock.toml 😁)
- If you're using pylock.toml, have code check that the recorded attestations are consistent
- When reviewing lock file diffs (which you should do!), make sure the digital attestations don't look weird and weren't suddenly deleted

A special thanks to William Woodruff, Facundo Tuesca, Dustin Ingram, and Donald Stufft for helping to make trusted publishers and digital attestations happen.

26.03.2026 03:59:58

Information Technology
1 day

If you've built documentation in the Python ecosystem, chances are you've used Martin Donath's work. His Material for MkDocs powers docs for FastAPI, uv, AWS, OpenAI, and tens of thousands of other projects. But when MkDocs 2.0 took a direction that would break Material and 300 ecosystem plugins, Martin went back to the drawing board. The result is Zensical: a new static site generator with a Rust core, differential builds in milliseconds instead of minutes, and a migration path designed to bring the whole community along.<br/> <br/> <strong>Episode sponsors</strong><br/> <br/> <a href='https://talkpython.fm/sentry'>Sentry Error Monitoring, Code talkpython26</a><br> <a href='https://talkpython.fm/training'>Talk Python Courses</a><br/> <br/> <h2 class="links-heading mb-4">Links from the show</h2> <div><strong>Guest</strong><br/> <strong>Martin Donath</strong>: <a href="https://github.com/squidfunk?featured_on=talkpython" target="_blank" >github.com</a><br/> <br/> <strong>Zensical</strong>: <a href="https://zensical.org?featured_on=talkpython" target="_blank" >zensical.org</a><br/> <strong>Material for MkDocs</strong>: <a href="https://squidfunk.github.io/mkdocs-material/?featured_on=talkpython" target="_blank" >squidfunk.github.io</a><br/> <strong>Getting Started</strong>: <a href="https://zensical.org/docs/get-started/?featured_on=talkpython" target="_blank" >zensical.org</a><br/> <strong>GitHub Pages</strong>: <a href="https://docs.github.com/en/pages?featured_on=talkpython" target="_blank" >docs.github.com</a><br/> <strong>Cloudflare Pages</strong>: <a href="https://pages.cloudflare.com?featured_on=talkpython" target="_blank" >pages.cloudflare.com</a><br/> <strong>Michael's Example</strong>: <a href="https://gist.github.com/mikeckennedy/f03686c4c4ce7ce88b41c6b91c3226cf?featured_on=talkpython" target="_blank" >gist.github.com</a><br/> <strong>Transition from MkDocs</strong>: <a href="https://zensical.org/docs/setup/basics/#transition-from-mkdocs" target="_blank" 
>zensical.org</a><br/> <strong>Hugo shortcodes</strong>: <a href="https://gohugo.io/content-management/shortcodes/?featured_on=talkpython" target="_blank" >gohugo.io</a><br/> <strong>A sense of the size of the project</strong>: <a href="https://blobs.talkpython.fm/zensical-size.webp?cache_id=fe7bda" target="_blank" >blobs.talkpython.fm</a><br/> <strong>Zensical Spark</strong>: <a href="https://zensical.org/spark/?featured_on=talkpython" target="_blank" >zensical.org</a><br/> <br/> <strong>Watch this episode on YouTube</strong>: <a href="https://www.youtube.com/watch?v=V1BvvIPUzes" target="_blank" >youtube.com</a><br/> <strong>Episode #542 deep-dive</strong>: <a href="https://talkpython.fm/episodes/show/542/zensical-a-modern-static-site-generator#takeaways-anchor" target="_blank" >talkpython.fm/542</a><br/> <strong>Episode transcripts</strong>: <a href="https://talkpython.fm/episodes/transcript/542/zensical-a-modern-static-site-generator" target="_blank" >talkpython.fm</a><br/> <br/> <strong>Theme Song: Developer Rap</strong><br/> <strong>🥁 Served in a Flask 🎸</strong>: <a href="https://talkpython.fm/flasksong" target="_blank" >talkpython.fm/flasksong</a><br/> <br/> <strong>---== Don't be a stranger ==---</strong><br/> <strong>YouTube</strong>: <a href="https://talkpython.fm/youtube" target="_blank" ><i class="fa-brands fa-youtube"></i> youtube.com/@talkpython</a><br/> <br/> <strong>Bluesky</strong>: <a href="https://bsky.app/profile/talkpython.fm" target="_blank" >@talkpython.fm</a><br/> <strong>Mastodon</strong>: <a href="https://fosstodon.org/web/@talkpython" target="_blank" ><i class="fa-brands fa-mastodon"></i> @talkpython@fosstodon.org</a><br/> <strong>X.com</strong>: <a href="https://x.com/talkpython" target="_blank" ><i class="fa-brands fa-twitter"></i> @talkpython</a><br/> <br/> <strong>Michael on Bluesky</strong>: <a href="https://bsky.app/profile/mkennedy.codes?featured_on=talkpython" target="_blank" >@mkennedy.codes</a><br/> 
<strong>Michael on Mastodon</strong>: <a href="https://fosstodon.org/web/@mkennedy" target="_blank" ><i class="fa-brands fa-mastodon"></i> @mkennedy@fosstodon.org</a><br/> <strong>Michael on X.com</strong>: <a href="https://x.com/mkennedy?featured_on=talkpython" target="_blank" ><i class="fa-brands fa-twitter"></i> @mkennedy</a><br/></div>

25.03.2026 20:55:16

Information Technology
1 day

With PyCharm 2026.1, our core IDE experience continues to evolve as we're bringing a broader set of professional-grade web tools to all users for free. Everyone, from beginners to backend-first developers, is getting access to a substantial set of JavaScript, TypeScript, and CSS features that were previously only available with a Pro subscription.

React, JavaScript, TypeScript, and CSS support

Leverage a comprehensive set of editing and formatting tools for modern web languages within PyCharm, including:

- Basic React support with code completion, component and attribute navigation, and React component and prop rename refactorings.
- Advanced import management: Enjoy automatic JavaScript and TypeScript imports as you work. Merge or remove unnecessary references via the Optimize Imports feature. Get required imports automatically when you paste code into the editor.
- Enhanced styling: Access CSS-tailored code completion, inspections, and quick-fixes, and view any changes in real time via the built-in web preview.
- Smart editor behavior: Utilize smart keys, code vision inlay hints, and postfix code completions designed for web development.

Navigation and code intelligence

Finding your way around web projects is now even more efficient with tools that allow for:

- Pro-grade navigation: Use dedicated gutter icons for Jump to… actions, recursive calls, and TypeScript source mapping.
- Core web refactorings: Perform essential code changes with reliable Rename refactorings and actions (Introduce Variable, Change Signature, Move Members, and more).
- Quality control: Maintain high code standards with professional-grade inspections, intentions, and quick-fixes.
- Code cleanup: Identify redundant code blocks through JavaScript and TypeScript duplicate detection.

Frameworks and integrated tools

With the added essential support for some of the most popular frontend frameworks and tools, you will have access to:

- Project initialization: Create new web projects quickly using the built-in Vite generator.
- Standard tooling: Standardize code quality with integrated support for Prettier, ESLint, TSLint, and Stylelint.
- Script management: Discover and execute npm scripts directly from your package.json.
- Security: Check project dependencies for security vulnerabilities.

We're excited to bring these tried-and-true features to the core PyCharm experience for free! We're certain these tools will help beginners, students, and hobbyists tackle real-world tasks within a single, powerful IDE. Best of all, core PyCharm can be used for both commercial and non-commercial projects, so it will grow with you as you move from learning to professional development.

25.03.2026 15:01:54

Information Technology
1 day

This tutorial shows you how to use Git to track changes in a project using just a few core commands and save clean snapshots of your work. If you've ever changed a file, broken something, and wished you could undo it, version control makes that possible. Git keeps a running history of your files so you can see what changed and when. In this guide, you'll set up Git locally and use the core workflow from the terminal to track and record changes in a Python project. By the end, you'll have a working Git repository with a recorded commit history you can inspect and manage:

[Screenshot: Commit history displayed with git log]

In the next sections, you'll create your own repository and begin building that history from scratch. Before you begin, you can download a free Git cheat sheet to keep the core Git workflow, essential commands, and commit tips at your fingertips. You can also take the interactive "How to Use Git: A Beginner's Guide" quiz to test your knowledge of Git basics: initializing repos, staging files, committing snapshots, and managing your project history. You'll receive a score upon completion to help you track your learning progress.

How to Use Git: Prerequisites

Before you start tracking your code with Git, make sure you have the right tools in place. This tutorial assumes that you're comfortable working with the command line and have some basic Python knowledge. Here's what you'll need to get started:

- A terminal or command prompt
- Python 3.10 or higher installed on your system

Note: Git and GitHub are often confused, but they're not the same thing. Git is version control software that runs on your computer. It tracks changes to your files and manages your project's history locally. GitHub is an online platform for hosting Git repositories. It provides collaboration tools that make sharing code, working with teams, and backing up your projects easier. You don't need a GitHub account to use Git or follow this tutorial. Later, if you want to share your code with others or back it up online, you can optionally push your Git repository to platforms like GitHub, GitLab, or Bitbucket. To learn more about the differences between Git and GitHub, check out Introduction to Git and GitHub for Python Developers.

With these prerequisites in place, you're ready to begin setting up Git and tracking changes in your project. In the next step, you'll install Git, prepare your existing Python files, and initialize your first repository.

Step 1: Install Git and Prepare Your Project

To start, you'll check whether Git is installed on your system, prepare a simple project, and initialize a Git repository so you can begin tracking changes right away.

Check Whether Git Is Already Installed

Before you can start using Git, you need to make sure it's installed on your machine. Chances are that Git is already present on your system. To check whether Git is installed, run this command:

$ git --version

If this command displays a Git version, you're good to go and can create a project directory. Otherwise, you need to install Git on your system before continuing.

Install Git on Your System

Luckily, Git provides installers for Windows, macOS, and Linux on its official website, offering a straightforward way to install Git on your machine. Because installation steps vary across operating systems, this guide links to the official documentation rather than reproducing those steps here. If you prefer a graphical interface, you can install a Git client such as GitHub Desktop, Sourcetree, or GitKraken. These tools install Git automatically during setup. Once installed, open your terminal and confirm that Git is available:

$ git --version
git version 2.24.0.windows.2

Your Git version may appear slightly different from this example, depending on your operating system and when you installed Git. That's perfectly fine. As long as Git is installed and the command runs successfully, you'll be able to follow along with the rest of this tutorial without any issues.

Create a Project Directory

Read the full article at https://realpython.com/how-to-use-git/ »

[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
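As a preview of where the tutorial is headed, the core workflow it builds up to looks roughly like this (directory and file names are illustrative, not from the article):

```shell
# Create a project directory and initialize an empty repository
mkdir git-demo && cd git-demo
git init

# Identify yourself for this repository (one-time setup)
git config user.name "Your Name"
git config user.email "you@example.com"

# Create a file, stage it, and record a snapshot
echo 'print("Hello, Git!")' > hello.py
git add hello.py
git commit -m "Add hello.py"

# Inspect the recorded history
git log --oneline
```

Each commit is a snapshot you can return to later; `git log` is how you inspect the history you've built.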

25.03.2026 14:00:00

Information Technology
1 day

About nine months ago I wrote an article about my quest to simplify my web development stack: how I went from SvelteKit on the frontend and Django on the backend to an all-Django stack for a new project, using Alpine AJAX to enable partial page updates. I've now been using this new stack for a while, and my approach, as well as my opinion, has changed significantly. Let's get into what works, what doesn't, and where I ended up.

A quick recap

Alpine AJAX is a lightweight alternative to htmx, which you can use to enhance server-side rendered HTML with a few attributes, turning <a> and <form> tags into AJAX-powered versions. No more full page refreshes when you submit a form. The key mechanic: when a form has x-target="comments", Alpine AJAX submits the form via AJAX, finds the element with that ID in the response, and swaps it into the page. The server returns HTML, not JSON.

In the original article I used django-template-partials (since merged into Django itself) to mark sections of a template as named partials using {% partialdef %}. Combined with a custom AlpineTemplateResponse, the view could automatically return just the targeted partial when the request came from Alpine AJAX.

Where I began: template partials

Let's say you have an article page with the article body parsed from Markdown, a like button, and a comment section.
The template looks something like this:

article.html

{% extends "base.html" %}

{% block body %}
  <article>
    <h1>{{ article.title }}</h1>
    {{ article_html|safe }}

    {% partialdef like_form inline %}
    <form method="post" id="like_form" x-target="like_form">
      {% csrf_token %}
      <button type="submit" name="toggle-like">
        {% if article.is_liked %}Unlike{% else %}Like{% endif %}
      </button>
    </form>
    {% endpartialdef %}

    {% partialdef comments inline %}
    <div id="comments">
      {% for comment in article.comments.all %}
        <div>{{ comment.user }}: {{ comment.text }}</div>
      {% endfor %}
      <form method="post" x-target="comments">
        {% csrf_token %}
        {{ comment_form }}
        <button type="submit" name="add-comment">Submit</button>
      </form>
    </div>
    {% endpartialdef %}
  </article>
{% endblock %}

Every form POSTs to the same article view, which handles all the actions in one big post method:

views.py

class ArticleView(View):
    def get_context(self, request, pk):
        article = get_object_or_404(
            Article.objects.prefetch_related("comments")
            .annotate_is_liked(request.user),
            pk=pk,
        )
        return {
            "article": article,
            "article_html": markdown(article.body),
            "comment_form": CommentForm(),
        }

    def post(self, request, pk):
        context = self.get_context(request, pk)
        article = context["article"]
        if "toggle-like" in request.POST:
            if article.is_liked:
                article.unlike(request.user)
                article.is_liked = False
            else:
                article.like(request.user)
                article.is_liked = True
            return AlpineTemplateResponse(request, "article.html", context)
        if "add-comment" in request.POST:
            form = CommentForm(request.POST)
            if form.is_valid():
                Comment.objects.create(article=article, user=request.user, ...)
                return AlpineTemplateResponse(request, "article.html", context)
        return redirect(article)

    def get(self, request, pk):
        context = self.get_context(request, pk)
        return AlpineTemplateResponse(request, "article.html", context)

(Please note that I'm not saying this is the correct way to do things, simply that this is how I used to do it in this particular project.)
The AlpineTemplateResponse from the original article takes care of automatically returning just the targeted partial when the request comes from Alpine AJAX. This works fine. I thought I was being smart to prevent template duplication this way, but there are two problems:

1. The view does too much work. Every POST action calls get_context, which fetches everything: the article, the parsed Markdown body, the comments, the like state, the comment form. When the user clicks "Like", we do all this work that we'll never use in the partial template. The template partial means the response is small, but the server-side work is exactly the same as rendering the full page.

2. The template is a mess. Those {% partialdef %} blocks scattered throughout the template make it noisy and hard to read. In a small example it's fine, but in a real template with 200+ lines, it gets ugly fast.

When doubt set in: switching to Jinja2

To be honest, though, the real killer of my motivation while working on this project has been the Django Template Language. I'm sorry, but I just hate it. I have since 2009, and I still do. The syntax is bad enough, but then you have to constantly fight its limitations. The fact that I can't simply call a function is incredibly annoying and causes way more boilerplate in the form of custom template tags and filters.

So, switch to Jinja2, right? Except that template partials aren't supported in combination with Jinja2. No more {% partialdef %}. Which means returning full-page responses for AJAX requests, which isn't exactly ideal.

I did it anyway. I ripped out all the {% partialdef %} tags, migrated my templates to Jinja2, and my views just returned the full template for AJAX requests. Alpine AJAX is smart enough to extract the elements it needs by their IDs and throw away the rest. This was simpler, and I was much happier writing Jinja2 templates, but the wastefulness got worse. Before, the server at least returned a small response.
Now it rendered the entire page and sent all of it over the wire, just for the browser to use a tiny piece of it. Of course it's still better than an old-fashioned MPA, where every response is a full page refresh, but not by a lot.

It was at this moment that I seriously thought about throwing the entire frontend away and rebuilding it in SvelteKit, with Django REST Framework returning JSON responses. But that seemed like a pretty big waste of effort, so instead I took a deep breath and thought about what I wanted:

1. Jinja2 templates. Non-negotiable.
2. Small, fast AJAX responses. No rendering the full page for a like toggle.
3. No template duplication between the full page and the AJAX response.
4. Simple views that only do the work they need to do.

Template partials gave me #2 and #3, but not #1 or #4. Switching to Jinja2 and returning the full template for AJAX requests gave me #1 and #3, but not #2 or #4. I needed a different approach.

Where I ended up: separate views with template includes

The answer turned out to be straightforward, and it was the one I initially discarded as "too much boilerplate": instead of one monolithic view handling all POST actions, split each action into its own view with its own URL. And instead of {% partialdef %}, use plain {% include %} tags to extract reusable template fragments. Let me show you. Here's the simplified article template:

article.html

{% extends "base.html" %}

{% block body %}
  <article>
    <h1>{{ article.title }}</h1>
    {{ article.body }}
    {% include "articles/_like_form.html" %}
    {% include "articles/_comments.html" %}
  </article>
{% endblock %}

Clean and readable. Each include is a self-contained fragment.
And here's the like form:

_like_form.html

<form
  method="post"
  action="{{ url('toggle-like', args=[article.id]) }}"
  id="like_form"
  x-target="like_form"
>
  {{ csrf_input }}
  {% if article.is_liked %}
    <button type="submit">Unlike</button>
  {% else %}
    <button type="submit">Like</button>
  {% endif %}
</form>

And finally, the view:

views.py

class ToggleLikeView(LoginRequiredMixin, View):
    def post(self, request, pk):
        article = get_object_or_404(
            Article.objects.annotate_is_liked(request.user),
            pk=pk,
        )
        if article.is_liked:
            article.unlike(request.user)
            article.is_liked = False
            article.like_count -= 1
        else:
            article.like(request.user)
            article.is_liked = True
            article.like_count += 1
        if is_alpine(request):
            return TemplateResponse(
                request,
                "articles/_like_form.html",
                {"article": article},
            )
        # For non-Alpine requests, we just redirect back
        return redirect(article)

No comment queries. No form building. No Markdown parsing. Just the like state. The is_alpine check provides a redirect fallback for non-JavaScript POST requests, keeping things progressive. And the ArticleView itself becomes GET-only: no more branching on POST keys, no get_context method that fetches everything for every action. Each view does one thing.

The trade-offs

There are a few downsides to this approach that are worth mentioning.

More templates. For the article page, I went from one template to several: the include fragments (_like_form.html, _comments.html) that are shared between the full page and the AJAX responses. When an action needs to update multiple elements on the page, you also end up with small response templates that combine the right includes. For example, if submitting a comment should update both the comment list and a comment count elsewhere on the page:

_add_comment_response.html

{% include "articles/_comments.html" %}
{% include "articles/_engagement_counts.html" %}

Trivial, but still a file you have to create and name.
It's also harder to make sure that a template fragment has access to the context it needs when it's included into the big template via {% include %}, compared to {% partialdef %} with one single view always rendering it.

More views and URL routes. Each action gets its own view class and its own path() entry. For a page with likes, comments, and subscriptions, that's three or four extra views.

But here's what I got in return:

Actual performance improvement. Not just smaller responses, but less work on the server. Each view only queries what it needs.

Jinja2. I'm using Jinja2 instead of the Django Template Language. I can call functions, I have proper expressions, and I don't need custom template tags for basic things. This alone was worth the switch.

Readable templates. The main article.html is short and shows the page structure at a glance. Each fragment is self-contained. No {% partialdef %} blocks scattered everywhere.

Simple views. Each view does exactly one thing. Easy to understand, easy to test, easy to optimize.

Conclusion

I went through three stages: template partials with the Django Template Language, full-page responses with Jinja2, and finally separate views with template includes. Each step solved a real problem with the previous approach. The pattern I've ended up with requires more files and views than I'd like, but each is simple and does one thing. It's become easier to understand the flow of every action.

My overall feelings on Django + Alpine AJAX have also changed. I still believe there are benefits to a simplified tech stack that uses hypermedia as the engine of state: just return HTML, instead of returning JSON to a JavaScript framework that then has to turn it into HTML. Conceptually it still makes sense to me. But the dream was to build a plain old Django application with simple views, simple templates, and old-fashioned server-rendered MPA pages, then sprinkle in a few Alpine AJAX attributes and magically get SPA-like usability. And it simply hasn't played out that way for me. Yes, you could do that, if you're fine with the wastefulness of returning full pages in response to AJAX requests. But when you want to do better than that, you end up with more boilerplate to make it possible to return small bits of HTML. And this isn't really about Alpine AJAX specifically; htmx would lead to the exact same place. The fundamental tension is in the HTML-over-the-wire approach itself: the server has to know which fragments of HTML to return, and that means structuring your views and templates around it. You trade the complexity of a JavaScript frontend for a different kind of complexity on the server.

Progressive enhancement adds to that complexity. Every form-handling view needs an is_alpine check with a redirect fallback, and every form needs to work both as a regular POST and as an AJAX submit. If I dropped progressive enhancement and just required JavaScript, those redirect fallbacks and the branching that comes with them would disappear. The views would be simpler. But I think progressive enhancement is important enough to keep in place.

Would I use Alpine AJAX (or htmx) again? Honestly: probably not. I have a lot more fun building frontends with SvelteKit, and for me Django shines when I limit its role to an API, ORM, and admin interface, not so much HTML templates and form handling. Building composable and reusable UI components is so much more natural in SvelteKit, and the performance is simply better (once the initial JS bundle has been downloaded and parsed). But am I going to throw away my current project's code and redo it all? No, I am not. Django with Alpine AJAX is a nice change of scenery, a playground I don't usually get to play in. I think I ended up with a good compromise, and hey: I still don't have to build and maintain a separate API, API docs, and frontend.
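For completeness, the is_alpine helper referenced throughout could be as simple as a header check. This is a sketch, not the author's actual code, and the header name is an assumption; verify what your Alpine AJAX version actually sends before relying on it:

```python
def is_alpine(request) -> bool:
    """Return True when the request appears to come from Alpine AJAX.

    Assumes Alpine AJAX marks its fetch requests with an
    "X-Alpine-Request" header -- check the Alpine AJAX docs for the
    header your version sends.
    """
    return request.headers.get("X-Alpine-Request") == "true"
```

In Django, request.headers is a case-insensitive mapping, so this works regardless of how the header is capitalized on the wire.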

25.03.2026 10:16:50

Information Technology
2 days

#727 ‚Äì MARCH 24, 2026 View in Browser ¬ª Sunsetting Jazzband Jazzband is (was) an Open Source cooperative for creating and maintaining projects. It maintained over 70 projects many of which were for the Django ecosystem. They included django-redis, django-nose, django-taggit, and loads more. Jazzband’s model has become untenable from the mass of AI submissions, and so it is winding down. JAZZBAND.CO Spyder: Your IDE for Data Science Development in Python Learn how to use the Spyder IDE, a Python code editor built for scientists, engineers, and data analysts working with data-heavy workflows. REAL PYTHON How Nordstrom Built Self-Healing Docs with AI Agents What if your docs updated themselves from Slack conversations? Join our webinar to learn how Nordstrom uses Temporal and AI agents to detect knowledge gaps, extract insights from chat history, and automatically generate pull requests ‚Üí TEMPORAL sponsor Comparing Python Packages for A/B Test Analysis A practical comparison of tea-tasting, Pingouin, statsmodels, and SciPy for A/B test analysis, with code examples. EVGENY IVANOV PyCon US 2026 Conference Schedule Announced PYCON.ORG Python Jobs Python + AI Content Specialist (Anywhere) Real Python More Python Jobs >>> Articles & Tutorials Guido Interviews Brett Cannon After last year‚Äôs release of the Python documentary, Guido decided to explore those contributors who weren‚Äôt mentioned. He now has an going series of written interviews with a variety of contributors from Python‚Äôs first 25 years. This interview is with Brett Cannon. GUIDO VAN ROSSUM “Requests” Needs You to Test Type Hints Requests is a popular HTTP client library and is frequently in the top 10 PyPI downloads. There is an on-going effort to add type hinting to the library and to make sure the next release causes few issues, they need help testing. SETH LARSON Depot CI: Built for the Agent era Depot CI: A new CI engine. Fast by design. 
Your GitHub Actions workflows, running on a fundamentally faster engine ‚Äî instant job startup, parallel steps, full debuggability, per-second billing. One command to migrate ‚Üí DEPOT sponsor Fire and Forget (Or Never) With Python’s Asyncio Python’s asyncio.create_task() can silently garbage collect fire-and-forget tasks in 3.12+, meaning they might never run. This article shows you how to use the background tasks set pattern to fix it. MICHAEL KENNEDY Thoughts on OpenAI Acquiring Astral Astral is the organization behind popular Python tools such as uv, ruff, and ty. Recently it was announced that OpenAI would be acquiring Astral. This opinion piece discusses the possible impact. SIMON WILLISON Standard Error Standard error is one of the two writable file streams that is used for printing errors, warning messages, or any outputs that shouldn’t be mixed with the main program. TREY HUNNER üéì Master Python’s Core Principles (Live Course) Transform your Python skills in just eight weeks, with live expert guidance. No more second-guessing if your code is “Pythonic enough.” Master Python’s object model, advanced iteration, decorators, and clean system design through live instruction and hands-on practice in a small group setting: REAL PYTHON sponsor Textual: Creating a Custom Checkbox The Textual TUI framework allows for a lot of customization and control over its widgets. This article shows you how to change a checkbox widget to give it a new look. MIKE DRISCOLL A Practical Guide to Python Supply Chain Security A comprehensive guide to securing your Python dependencies from ingestion to deployment, covering linting, pinning, vulnerability scanning, SBOMs, and attestations BERN√ÅT G√ÅBOR Python 3.15’s JIT Is Now Back on Track Python 3.15‚Äôs JIT is now back on track, meeting the performance targets the team set for itself. Progress was a bit bumpy and this post talks about what happened. 
KEN JIN

From Properties to Descriptors
This article is about the weird and wonderful world of descriptors in Python. Learn what they’re for and how to use one of the trickier Python concepts.
STEPHEN GRUPPETTA

Modern Python Monorepo With uv and prek
Talk Python interviews Amogh Desai and Jarek Potiuk and they talk about how to use a monorepo with uv and prek.
TALK PYTHON podcast

Downloading Files From URLs With Python
Learn to download files from URLs with Python using urllib and requests, including data streaming for large files.
REAL PYTHON course

Building a Django Chat App With WebSockets
This article covers the best ways to build a chat app in Django using WebSockets and ASGI.
HONEYBADGER.IO • Shared by Addison Curtis

Projects & Code

zsh-safe-venv-auto: ZSH Plugin That Activates Python venvs
GITHUB.COM/MAVWOLVERINE

mypyc: Compile Type Annotated Python to Fast C Extensions
GITHUB.COM/MYPYC

pristan: The Simplest Way to Create a Plugin System
GITHUB.COM/MUTATING • Shared by pomponchik

MaskOps: PII Masking as a Native Polars Plugin
GITHUB.COM/FCARVAJALBROWN • Shared by Felipe Carvajal Brown

django-tasks-rq: RQ Based Django Tasks Backend
GITHUB.COM/REALORANGEONE

Events

Weekly Real Python Office Hours Q&A (Virtual)
March 25, 2026
REALPYTHON.COM

Django Girls Colombia 2026
March 28 to March 29, 2026
DJANGOGIRLS.ORG

Python Sheffield
March 31, 2026
GOOGLE.COM

Python Southwest Florida (PySWFL)
April 1, 2026
MEETUP.COM

STL Python
April 2, 2026
MEETUP.COM

Happy Pythoning!
This was PyCoder’s Weekly Issue #727.
View in Browser »

[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]
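The "background tasks set" pattern mentioned in the asyncio item above can be sketched in a few lines. The event loop only keeps a weak reference to a task created with asyncio.create_task(), so holding a strong reference in a module-level set keeps a fire-and-forget task alive until it finishes. The helper name below is illustrative, not from the article:

```python
import asyncio

background_tasks = set()

def fire_and_forget(coro):
    """Schedule coro without awaiting it, keeping a strong reference."""
    task = asyncio.create_task(coro)
    background_tasks.add(task)  # strong reference: prevents premature GC
    # Drop the reference once the task finishes so the set stays small.
    task.add_done_callback(background_tasks.discard)
    return task

async def main():
    results = []

    async def work(n):
        await asyncio.sleep(0)
        results.append(n)

    for n in range(3):
        fire_and_forget(work(n))
    await asyncio.sleep(0.1)  # give the background tasks time to finish
    return results
```

After `asyncio.run(main())` returns, all three pieces of work have run and the done callbacks have emptied the set again.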

24.03.2026 19:30:00


On April 14, I’ll be teaching a new 4-hour live workshop for O’Reilly: Building Data Apps with Streamlit and Copilot. If you work in Python and want to turn your analyses into interactive, shareable tools, this workshop is designed for you. We’ll start from a Jupyter notebook and build a complete Streamlit app that lets users explore a dataset through interactive controls, charts, and maps. Along the way, we’ll use Copilot to speed up development and discover Streamlit features more efficiently.

What we’ll cover
- Structuring a Streamlit app
- Working with user input (select boxes, filters, etc.)
- Creating interactive graphics with Plotly
- Organizing the UI with columns and tabs
- Deploying your app to Streamlit Cloud

The workshop is hands-on: you’ll build the app step-by-step, and by the end you’ll have a working project you can adapt to your own data.

What You’ll Build
Here’s a screenshot from the app we’ll build together: The app lets users choose a state and demographic statistic, explore how it changes over time, and view the data as a chart, map, or table. And while the example uses demographic data, the skills you’ll learn (structuring an app, building interactive controls, and creating dynamic visualizations) apply to any Streamlit project you want to build.

Who is this for?
- Data scientists and analysts who want to make their work more interactive
- Python users who want to build dashboards without learning web development
- Anyone curious about Streamlit or Copilot

If you’re interested, I’d love to have you join. Registration is open now.

24.03.2026 16:46:35


When I was a child, I used to pace up and down the corridor at home pretending to teach an imaginary group of people. It was my way of learning. It still is. I started writing about Python as a learning tool—to help me sort things out in my head, weave a thread through all the disparate bits of information, clarify my thoughts, make sure any knowledge gaps are filled. I started The Python Coding Stack three years ago. That’s the first of the mystery numbers in the post’s title revealed! I had written elsewhere before, but at the time of starting The Stack, I felt I had found my own “Python voice”. I had been teaching Python for nearly a decade. I had written plenty of articles, but setting up The Python Coding Stack was a deliberate choice to step up. I was still writing articles primarily for my own benefit, but now I was also writing for others, hoping they would want to learn the way I do. And 7,600 subscribers apparently do. Thank you for joining this journey, whether you were there three years ago or you joined a few days ago. If you just joined, there’s an archive of 121 articles, most of them long-form tutorials or step-by-step guides. A special thank you to the 33 subscribers who chose to upgrade to premium and join The Club. It may only amount to 3 coffees per month for you, but it makes a difference to me. Thank you! I hope you’ve been enjoying the exclusive content for The Club members. And perhaps, if a few more decide to join you in The Club (you can surely cut three coffees out of your monthly intake!), then this publication may even become self-sustainable. Your support can make a real difference—if you value these articles and want to see them continue, please consider joining now. At the moment, I give up a lot of my time for free to think about my articles, plan them, draft them, review them technically, review them linguistically, get them ready for publication, and then publish.

Subscribe now

I mentioned my live teaching earlier.
My written articles and my live teaching have a lot in common. One of the hardest things about teaching (or communication in general) is to place yourself in the learner’s mindset. I know, it’s obvious. But it’s hard. A string of words can make perfect sense to someone who already understands the concept, but it’s hard to understand for someone learning it for the first time. Going from A to B can be a smooth reasoning step for an expert, but requires a few more intermediate steps for a novice. A trait that helps me in my teaching is my ability to recall the pain points I had when learning a topic. Everything is easy once you know it, but hard when you don’t. Remembering that what comes easily today was once hard is essential for teaching, whatever the format. I often use my writing to help me with my live teaching. And, just as often, I discover a new angle or insight during live teaching that I then put down in writing. It’s a two-way street. Both forms of communication—live teaching and writing—complement each other. All this to say that I enjoy writing these articles. They’re useful for me personally, and for my work teaching Python. And I hope they’re useful for you.

121 articles. The cliché would have me say that choosing favourites is like choosing a favourite child. But that’s not the case. There are articles I like less than others. So, I tried to put together a highlights reel of the past three years. Here we go…

- The Centre of the Python Universe • Objects
- My Life • The Autobiography of a Python Object
- The One About the £5 Note and the Trip to the Coffee Shop • The Difference Between 'is' and '==' in Python
- When a Duck Calls Out • On Duck Typing and Callables in Python
- A Stroll Across Python • Fancy and Not So Fancy Tools
- The Curious Little Shop at The End of My Street • Python’s f-strings
- Do You Really Know How 'or' And 'and' Work in Python?
- If You Find if..else in List Comprehensions Confusing, Read This, Else…
- Where Do I Store This? • Data Types and Structures
- Clearing The Deque—Tidying My Daughter’s Soft Toys • A Python Picture Story
- Are Python Dictionaries Ordered Data Structures?
- Hermione’s Undetectable Extension Charm: Revealed (Using Python)
- bytes: The Lesser-Known Python Built-In Sequence • And Understanding UTF-8 Encoding
- Where’s William? How Quickly Can You Find Him? • What’s a Python Hashable Object?

And here are the posts in The Club section of this publication, exclusive for premium subscribers: The Club | The Python Coding Stack

Happy 3rd Birthday to The Python Coding Stack. From just under a hundred people in the first week to 7,600+ today, this community has grown thanks to your enthusiasm. Let’s keep up the momentum—consider joining The Club today! Your membership can help ensure The Python Coding Stack continues on its path, stronger than ever.

Subscribe now

Photo by Daria Obymaha

For more Python resources, you can also visit Real Python—you may even stumble on one of my own articles or courses there! Also, are you interested in technical writing? Would you like to make your own writing more narrative, more engaging, more memorable? Have a look at Breaking the Rules. And you can find out more about me at stephengruppetta.com

24.03.2026 14:16:07


This article covers a useful LLM pattern where you ask the LLM to write code to solve a problem instead of asking it to solve the problem directly.

The problem of merging two transcripts
I had two files that contained two halves of the transcript of an audio recording, and I wanted to use an LLM to merge the two halves. There were three reasons that stopped me from simply copying part 2 and pasting it after part 1: the two transcripts overlapped (the end of part 1 was after the start of part 2); the timestamps for part 2 started from 0, so they were missing an offset; and speaker identification was not consistent. I uploaded the two halves into ChatGPT and asked it to merge the two transcripts, fixing the timestamps and the speaker identification but not changing the text. The result I got back was a ridiculous attempt at providing the full transcript, with two sections that supposedly represented parts of either transcript that I could just copy and paste confidently, and a couple of other blunders. Instead of fighting ChatGPT, I decided to use a very useful pattern I learned about last year.

Ask the LLM to write code for it
Instead of asking ChatGPT to merge the transcripts, I could ask it to analyse them, find the solutions to the three problems listed above, and then write code that would merge the transcripts. Since I was confident that ChatGPT could identify the overlap between the two files, use the overlap information to compute the timestamp offset required for part 2, and figure out that you had to swap the two speakers in part 2, I knew ChatGPT would be able to write a Python script that could read from both files and apply a couple of string operations to the second part. This yielded much better results in two ways. ChatGPT was able to find the solutions for the three problems above and write a script that fixed them automatically. That was the goal.
On top of that, since ChatGPT had a very clear implicit goal (get the final merged transcript) and since running Python code is something that ChatGPT can do, ChatGPT even ran the script for me and produced two artifacts at the end: the full Python script I could run against the two halves if I wanted, and the final, fixed transcript. This is an example application of a really useful LLM pattern: Don't ask the LLM to solve a problem. Instead, ask it to write code that solves the problem. As another visual example, it's much easier to ask an LLM to write a Python script that draws a path that solves a maze (that's just a couple hundred lines of code) than it is to upload an image and ask the LLM to draw a valid path on the picture of a maze. Try it yourself!
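As a sketch of what such a generated script might look like, the snippet below applies the three fixes (drop the overlapping lines, shift part 2's timestamps, swap its speaker labels) to transcripts modelled as (seconds, speaker, text) tuples. The data model, the exact-text overlap matching, and the two-speaker swap map are simplifying assumptions of mine, not what ChatGPT actually produced:

```python
# Hypothetical sketch: merge two overlapping transcript halves,
# assuming each transcript is a list of (seconds, speaker, text)
# entries, overlapping lines match word for word, and exactly two
# speaker labels got swapped in part 2.

def merge_transcripts(part1, part2):
    # Map each line of part 1 to its timestamp, to detect the overlap.
    seen = {text: t for t, _, text in part1}

    # The first part-2 line that already appears in part 1 anchors the
    # timestamp offset (part 2's clock restarted at 0).
    anchor_t2, anchor_text = next(
        (t, text) for t, _, text in part2 if text in seen
    )
    offset = seen[anchor_text] - anchor_t2

    swap = {"Speaker 1": "Speaker 2", "Speaker 2": "Speaker 1"}
    merged = list(part1)
    for t, speaker, text in part2:
        if text in seen:
            continue  # skip overlap lines already present from part 1
        merged.append((t + offset, swap[speaker], text))
    return merged

part1 = [
    (0, "Speaker 1", "Welcome to the show."),
    (7, "Speaker 2", "Glad to be here."),
    (15, "Speaker 1", "Let's talk about LLM patterns."),
]
part2 = [  # clock reset to 0, speaker labels swapped
    (0, "Speaker 2", "Let's talk about LLM patterns."),
    (9, "Speaker 1", "Such as asking for code instead of answers."),
]

merged = merge_transcripts(part1, part2)
```

With this toy input, the overlap line anchors an offset of 15 seconds, so the final part-2 line lands at 24 seconds with its speaker label corrected.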

24.03.2026 13:16:00


Nintendo has multiple popular racing franchises, including Mario Kart, Kirby Air Ride, and F-Zero. Each of these franchises spans multiple titles and consoles and has ways to play with more than one console in a single shared “game lobby”. This feature makes these games interesting for LAN parties, where you have many players, consoles, and games in one area. What does it mean to be the most “LAN-party-able” Nintendo racing game? There are three metrics I found interesting for this question: most-players, price-per-player, and “real-estate”-per-player (aka: TVs/consoles). There is a different “best” game according to each of these metrics. I've compiled the data and created a small calculator to compare. [Interactive calculator: pick a game, player count, and multiplayer mode to see the price and the number of consoles, games, cables, adapters, and TVs required.] Price includes consoles, controllers, games, cables, and adapters. Prices sourced from Pricecharting and Microcenter (no affiliation) for March 2026. TVs and Mountain Dew™ not included in price. “Cables” means Link Cables for the GBA or Ethernet for the GameCube, Switch, and Switch 2. “Adapters” means ETH2GC for the GameCube, “USB-A to Ethernet” adapters for docked Switch 1, or “USB-C to Ethernet” for undocked Switch 1 and Switch 2. Which games are the winners? Best price per player: Mario Kart DS using DS Download Play. With this method you can buy a single copy of Mario Kart DS and play concurrently with up to 8 players. With 8 players it works out to $62 per player. Best real-estate per player: Mario Kart: Double Dash!!
in LAN mode. This is somehow the only game in the list that supports 4 players per console in LAN mode; all later games support only 1 or 2 players per console in LAN mode. This means fewer TVs and consoles per player, which helps alleviate the slightly higher prices these days for GameCube gear. Most players: F-Zero 99 hands down wins most players, with up to 99 per lobby. Mario Kart World also supports up to 24 concurrent players in LAN mode, but needing 12 Switch 2s, compared to F-Zero 99 being a Switch 1 title, means a much higher price.

Why did you build this?
This post was inspired by hosting a LAN party with friends for my birthday. Researching and verifying the limits for each game and console took a lot of work, so I hope this can save someone some time wrangling all these numbers in the future. After all this research, the games I chose for the LAN party are “Mario Kart: Double Dash” on the GameCube, “Mario Kart 8 Deluxe” and “F-Zero 99” on the Nintendo Switch, and “Mario Kart World” on the Nintendo Switch 2. The data and the script I used to generate this calculator are all open source. If there are mistakes or improvements, please submit a patch. Please note that I don't own a DS, 3DS, or Wii U, so the numbers there are more likely to be incorrect. The rest of this blog post will be about the specifics for each console and game.

What features does each game support?
Each game supports some combination of four multiplayer modes: Local, LAN, Online, and Share. Availability depends on both the console and the game.
Game                        Console    Year  Local  LAN  Online  Share
Super Mario Kart            SNES       1992  YES    NO   NO      NO
F-Zero                      SNES       1990  YES    NO   NO      NO
Mario Kart 64               N64        1996  YES    NO   NO      NO
Mario Kart: Super Circuit   GBA        2001  NO     YES  NO      YES (1)
F-Zero: Maximum Velocity    GBA        2003  NO     YES  NO      YES (1)
F-Zero: GP Legend           GBA        2003  NO     YES  NO      YES (1)
Mario Kart: Double Dash!!   GameCube   2003  YES    YES  NO      NO
Kirby Air Ride              GameCube   2003  YES    YES  NO      NO
F-Zero GX                   GameCube   2003  YES    NO   NO      NO
Mario Kart DS               DS         2005  NO     YES  YES     YES (2)
Mario Kart Wii              Wii        2008  YES    NO   YES     NO
Mario Kart 7                3DS/2DS    2011  NO     YES  YES     YES (2)
Mario Kart 8                Wii U      2014  YES    NO   YES     NO
Mario Kart 8 Deluxe         Switch     2017  YES    YES  YES     NO
F-Zero 99                   Switch     2023  NO     NO   YES     NO
Mario Kart World            Switch 2   2025  YES    YES  YES     NO
Kirby Air Riders            Switch 2   2025  YES    YES  YES     YES (3)

(1): via GBA Single-Pak Link Mode
(2): via DS Download Play
(3): via Nintendo Switch GameShare

Pricing
Here is a table with costs from March 2026 for each game, console, and accessory, grouped by console:

SNES ($129; controller $17): F-Zero $18, Super Mario Kart $34
N64 ($87; controller $16): Mario Kart 64 $46
GBA ($73; link cable $24): Mario Kart: Super Circuit $17, F-Zero: GP Legend $30, F-Zero: Maximum Velocity $20
GameCube ($119; controller $28, cable $5, adapter $25): F-Zero GX $61, Kirby Air Ride $71, Mario Kart: Double Dash!! $60
DS ($60): Mario Kart DS $17
Wii ($60; controller $13): Mario Kart Wii $35
3DS/2DS ($94): Mario Kart 7 $14
Wii U ($129; controller $30): Mario Kart 8 $10
Switch ($135; controller $36, cable $5): Mario Kart 8 Deluxe $35, F-Zero 99 $0
Switch 2 ($395; controller $36, cable $5, adapter $20): Kirby Air Riders $50, Mario Kart World $58

Game Boy Advance
The Game Boy Advance (GBA) had multiple features that made the console perfect for multi-console multiplayer: the new GBA Link Cables, which allowed more than two consoles to connect, and Single-Pak Link Play, which all the GBA titles on this list support. GBA Link Cables have three terminals per cable: a larger grey plug, a smaller blue plug, and a smaller blue socket in the middle of the cable. The grey and blue plugs both fit into a GBA console, but the larger grey plug does not fit into the small blue socket on the cable.
To connect four players, two players connect like normal, and then player three connects their blue plug into the blue socket between the existing connected consoles. In the end, this means you only need N-1 cables for N consoles, and that a single player (player 1) ends up with a blue plug in their console. The second feature, “Single-Pak Link Play”, allowed a single player to own a cartridge and share the game with other connected consoles if the game supports the mode. This mode is also sometimes called “Multiboot” or “Joyboot”. Because the game ROM data itself is transferred to the other consoles, this often meant long load times during startup, and not all content was playable by all players. For example, in Mario Kart: Super Circuit only a subset of maps and characters were available in Single-Pak Link Play mode.

GameCube
The GameCube was Nintendo’s first internet-enabled console, even if only 8 titles supported the feature. Only three titles supported LAN play: Kirby Air Ride, Mario Kart: Double Dash!!, and 1080° Avalanche. The GameCube Broadband Adapter is legendarily expensive now due to how few games supported the feature at all. Nowadays, it's advised to modify your GameCube with a method to boot into Swiss and use Swiss’s “Emulate Broadband Adapter” feature with an ETH2GC adapter. These adapters are cheap, even if you don't assemble them yourself. There are a few variants: ETH2GC Sidecar, ETH2GC Lite, and ETH2GC Card Slot. I am currently running an ETH2GC Sidecar and an ETH2GC Card Slot, and both work together with Mario Kart: Double Dash!! and Kirby Air Ride.

DS, Wii, Wii U, 3DS
The DS supported a feature called DS Download Play, which, similar to Single-Pak Link Play for the GBA, allowed playing a single game cartridge with up to 8 consoles. The 3DS also supported this feature. Beyond this I didn't have a lot to say about these consoles, as they aren't my interest.
If you have more to say, maybe write your own blog post and send it to me after!

Nintendo Switch
The first-generation Switch is a quirky console, being both portable and dockable. Because the console itself has a screen, you can play games without a TV. However, to access the LAN mode in Mario Kart 8 Deluxe you need to be physically linked via Ethernet. The problem is that the original Switch doesn't have an Ethernet port, and depending on whether you're playing docked or undocked, you'll need a different adapter! If you're playing undocked, using the Switch screen as your “TV”, you'll need to buy a USB-C to Ethernet adapter. If you're playing docked, you'll need to buy a USB-A to Ethernet adapter, as the dock itself doesn't have a USB-C port except for power delivery. Switch OLED docks do have an Ethernet port, so if you have one of those models then you won't need an adapter in docked mode. Test your adapters before your LAN party, as not every adapter will be accepted by the Switch! Both the first-generation Switch and the Switch 2 also come with two controllers (“Joy-Cons”) per console, meaning you'll have to buy fewer controllers to reach high player counts.

Nintendo Switch 2
The Switch 2 is similar to the Switch 1, being both portable and dockable. Nintendo included an Ethernet port on the Switch 2 dock, along with USB-A and USB-C ports. So if you're playing without a TV, you'll still need a USB-C to Ethernet adapter for your Switch 2. The Switch 2 adds support for a new mode: GameShare. This mode is similar to DS Download Play and Single-Pak Link in terms of functionality, but in terms of implementation it's local game streaming! Even cooler, this feature means that first-generation Switch consoles can “play” some Switch 2 games like Kirby Air Riders without sacrificing any features.

Mario Kart: Double Dash!!
The game supports up to 16 players; however, you can only have 8 total karts per race.
Double Dash allows two players to share a single kart, with one player driving and the other throwing items. LAN mode also doesn't allow selecting which character or kart you are; you are assigned the kart you will be driving.

Kirby Air Ride
Despite supporting LAN mode and having 8 Kirby colors, you are only allowed a maximum of 4 players within the City Trial or Air Ride modes. So the LAN mode only allows fewer people to share a TV.

F-Zero GX
I saw a reference online to a Nintendo-hosted online leaderboard via passwords or ghosts for Time Attack, but wasn't able to find an actual source that this happened. If you have a reference or video, please send it my way! Otherwise, I may be misremembering something else that I read in the past.

F-Zero 99
The first F-Zero game in over 20 years (sorry F-Zero fans, us Kirby Air Ride fans know how you feel). It allows up to 99 players in a private lobby. The game is free, but you need a Switch or Switch 2 console per player. There isn't any local or LAN multiplayer, so once Nintendo Switch Online is sunset this game won't be playable in multiplayer.

Mario Kart 8 Deluxe, Mario Kart World
In both Mario Kart 8 Deluxe and Mario Kart World, the LAN play mode is hidden behind holding L+R and pressing the left joystick down on the "Wireless Play" option.

Kirby Air Riders
There is very little information online about Kirby Air Riders' LAN multiplayer mode. The official Nintendo documentation doesn't describe the allowed number of players per console. If anyone has more definitive data, please reach out. Nintendo Switch GameShare allows playing online with four players with only one cartridge. Nintendo Switch GameShare for Kirby Air Riders is also compatible with first-generation Nintendo Switch consoles. Note that at launch Kirby Air Riders was very disappointing, with online play only allowing one player per console. An update added support for more than one player per console for wireless play. LAN mode still requires 1 console per player.
Local Multiplayer
Multiple players play the game on a single console with different controllers. Prices below are for 1, 2, 3, and 4 players, left to right:

Super Mario Kart            $163  $180
F-Zero                      $147  $164
Mario Kart 64               $133  $149  $165  $181
Mario Kart: Double Dash!!   $179  $207  $235  $263
Kirby Air Ride              $190  $218  $246  $274
F-Zero GX                   $180  $208  $236  $264
Mario Kart Wii              $95   $108  $121  $134
Mario Kart 8                $139  $169  $199  $229
Mario Kart 8 Deluxe         $170  $170  $206  $242
Mario Kart World            $453  $453  $489  $525
Kirby Air Riders            $445  $445  $481  $517

LAN Multiplayer
Consoles communicate directly with each other through wired, short-range wireless, or “local” internet connections, such as Ethernet running to a switch/router, or consoles directly wired together with Ethernet or a console-specific link cable. What distinguishes this mode from “Wireless” is that it will continue to work even after Nintendo servers have been discontinued. Prices below start at 2 players and increase one player at a time, left to right:

Mario Kart: Super Circuit   $204 $318 $432
F-Zero: Maximum Velocity    $210 $327 $444
F-Zero: GP Legend           $230 $357 $484
Mario Kart: Double Dash!!   $418 $446 $474 $502 $530 $558 $586 $795 $823 $851 $879 $1088 $1116 $1144 $1172
Kirby Air Ride              $440 $468 $496
Mario Kart DS               $154 $231 $308 $385 $462 $539 $616
Mario Kart 7                $216 $324 $432 $540 $648 $756 $864
Mario Kart 8 Deluxe         $390 $390 $390 $585 $585 $780 $780 $975 $975 $1170 $1170
Mario Kart World            $956 $956 $956 $1434 $1434 $1912 $1912 $2390 $2390 $2868 $2868 $3346 $3346 $3824 $3824 $4302 $4302 $4780 $4780 $5258 $5258 $5736 $5736
Kirby Air Riders            $940 $1410 $1880 $2350 $2820 $3290 $3760 $4230 $4700 $5170 $5640 $6110 $6580 $7050 $7520

Online Multiplayer
Multiplayer where you can play against your friends or other players without needing to be on the same local network. This uses either Wi-Fi or Ethernet, but connected to the global internet. This mode relies on a central service, so once that is discontinued, play will either not be possible or will require modifications to your console, such as wiimmfi for the Nintendo Wii.
Prices below start at 2 players and increase one player at a time, left to right:

Mario Kart DS               $154 $231 $308
Mario Kart Wii              $190 $203 $216 $311 $324 $419 $432 $527 $540 $635 $648
Mario Kart 7                $216 $324 $432
Mario Kart 8                $278 $308 $338 $477 $507 $646 $676
Mario Kart 8 Deluxe         $340 $340 $340 $510 $510 $680 $680
F-Zero 99                   $270 $405 $540 $675 $810 $945 $1080 $1215 $1350 $1485 $1620 $1755 $1890 $2025 $2160 $2295 $2430 $2565 $2700 $2835 $2970 $3105 $3240
Mario Kart World            $906 $906 $906 $1359 $1359 $1812 $1812 $2265 $2265 $2718 $2718 $3171 $3171 $3624 $3624
Kirby Air Riders            $890 $890 $890 $1335 $1335 $1780 $1780

Mario Kart Wii and Mario Kart DS have mods you can apply to play online on private servers. Mario Kart: Double Dash!! and Kirby Air Ride also have mods that allow wireless play, which wasn't possible when the games were first released.

Share Multiplayer (Single-Pak Link, DS Download Play, Game Share)
This multiplayer mode allows playing with local players that own a console, but not the game. This usually results in a degraded experience for players that don't own the game, such as a reduced number of playable characters, karts, or racetracks. Nintendo Switch “GameShare” uses game streaming between consoles.
Prices below start at 2 players and increase one player at a time, left to right:

Mario Kart: Super Circuit   $187 $284 $381
F-Zero: Maximum Velocity    $190 $287 $384
F-Zero: GP Legend           $200 $297 $394
Mario Kart DS               $137 $197 $257 $317 $377 $437 $497
Mario Kart 7                $202 $296 $390 $484 $578 $672 $766
Kirby Air Riders            $840 $1235 $1630

All Multiplayers
Here is a table comparing all multiplayer modes and their costs. Prices start at 1 player and increase one player at a time, left to right:

Super Mario Kart
  LOCAL:  $163 $180
F-Zero
  LOCAL:  $147 $164
Mario Kart 64
  LOCAL:  $133 $149 $165 $181
Mario Kart: Super Circuit
  LOCAL:  $90
  LAN:    $90 $204 $318 $432
  SHARE:  $90 $187 $284 $381
F-Zero: Maximum Velocity
  LOCAL:  $93
  LAN:    $93 $210 $327 $444
  SHARE:  $93 $190 $287 $384
F-Zero: GP Legend
  LOCAL:  $103
  LAN:    $103 $230 $357 $484
  SHARE:  $103 $200 $297 $394
Mario Kart: Double Dash!!
  LOCAL:  $179 $207 $235 $263
  LAN:    $179 $418 $446 $474 $502 $530 $558 $586 $795 $823 $851 $879 $1088 $1116 $1144 $1172
Kirby Air Ride
  LOCAL:  $190 $218 $246 $274
  LAN:    $190 $440 $468 $496
F-Zero GX
  LOCAL:  $180 $208 $236 $264
Mario Kart DS
  LOCAL:  $77
  LAN:    $77 $154 $231 $308 $385 $462 $539 $616
  ONLINE: $77 $154 $231 $308
  SHARE:  $77 $137 $197 $257 $317 $377 $437 $497
Mario Kart Wii
  LOCAL:  $95 $108 $121 $134
  ONLINE: $95 $190 $203 $216 $311 $324 $419 $432 $527 $540 $635 $648
Mario Kart 7
  LOCAL:  $108
  LAN:    $108 $216 $324 $432 $540 $648 $756 $864
  ONLINE: $108 $216 $324 $432
  SHARE:  $108 $202 $296 $390 $484 $578 $672 $766
Mario Kart 8
  LOCAL:  $139 $169 $199 $229
  ONLINE: $139 $278 $308 $338 $477 $507 $646 $676
Mario Kart 8 Deluxe
  LOCAL:  $170 $170 $206 $242
  LAN:    $170 $390 $390 $390 $585 $585 $780 $780 $975 $975 $1170 $1170
  ONLINE: $170 $340 $340 $340 $510 $510 $680 $680
F-Zero 99
  LOCAL:  $135
  ONLINE: $135 $270 $405 $540 $675 $810 $945 $1080 $1215 $1350 $1485 $1620 $1755 $1890 $2025 $2160 $2295 $2430 $2565 $2700 $2835 $2970 $3105 $3240
Mario Kart World
  LOCAL:  $453 $453 $489 $525
  LAN:    $453 $956 $956 $956 $1434 $1434 $1912 $1912 $2390 $2390 $2868 $2868 $3346 $3346 $3824 $3824 $4302 $4302 $4780 $4780 $5258 $5258 $5736 $5736
  ONLINE: $453 $906 $906 $906 $1359 $1359 $1812 $1812 $2265 $2265 $2718 $2718 $3171 $3171 $3624 $3624
Kirby Air Riders
  LOCAL:  $445 $445 $481 $517
  LAN:    $445 $940 $1410 $1880 $2350 $2820 $3290 $3760 $4230 $4700 $5170 $5640 $6110 $6580 $7050 $7520
  ONLINE: $445 $890 $890 $890 $1335 $1335 $1780 $1780
  SHARE:  $445 $840 $1235 $1630

Thanks for keeping RSS alive! ♥
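For reference, the arithmetic behind the tables above is easy to reproduce. A minimal sketch using prices quoted in the post: Download Play shares one cartridge across all consoles, while a GBA LAN chain needs a game per player and N-1 link cables for N consoles. The function names are mine, and the defaults are the post's Mario Kart: Super Circuit prices:

```python
def download_play_cost(players, console_price, game_price):
    # One cartridge shared by every console (DS Download Play).
    return players * console_price + game_price

def gba_lan_cost(players, console_price=73, game_price=17, cable_price=24):
    # Every player owns the game; N daisy-chained consoles need N-1 cables.
    return (players * (console_price + game_price)
            + (players - 1) * cable_price)

# Mario Kart DS with 8 players: 8 x $60 consoles + one $17 cartridge.
total = download_play_cost(8, console_price=60, game_price=17)  # 497
per_player = total / 8                                          # 62.125

# Mario Kart: Super Circuit over a 4-console link chain.
mkss_lan = gba_lan_cost(4)                                      # 432
```

Both results match the tables above: $497 total (about $62 per player) for 8-player Mario Kart DS Download Play, and $432 for 4-player Super Circuit LAN.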

24.03.2026 00:00:00


TL;DR: I converted Python Bytes from Quart/Flask to the Rust-backed Robyn framework and benchmarked it with Locust. There was no meaningful speed or memory improvement (Robyn actually used more memory). Framework maturity, ecosystem depth, and app server flexibility still matter more than raw benchmark numbers.

Last week I played with the idea of replacing Quart (async Flask) with Robyn for our bigger web apps. Robyn is built almost entirely in Rust, and in the benchmarks it looks dramatically better. Not just a little bit faster, but 25 times faster. However, if you’ve been around the block for a while, you know that benchmarks and how things work for your app and your situation are not always the same thing. So I picked the simplest complex app that I run, Python Bytes, and converted it entirely to run on the Robyn framework. This took a few hours of careful work and experimenting, and I even had to create a Python package to allow Robyn to run the Chameleon template language. When I was done, it was time to fire up Locust and see if there were any dramatic performance improvements. I certainly wasn’t expecting 25x, but 2x? 1.5x? That would have been really impressive.

Did Robyn improve speed or memory over Flask?
The results were in, and the answer was just about no difference in RPS or latency. It turns out that almost all the computational time is in the logic of our app, which of course doesn’t change and I never intended to change it.

- Requests per second: No meaningful difference between Robyn and Quart/Granian
- Latency: Essentially identical under load
- Memory: Robyn actually used more memory, not less

Another area I was hoping to optimize was memory. Our web apps use a lot of memory for what they are. They’re certainly not trivial. But running a couple of copies of the app in a web garden was using way more than I expected they should. And I thought moving closer to Rust might have a positive influence on memory too.
It turns out the Robyn fork actually used more memory, not less, than the current setup. After all, our web apps already run on Granian, which is mostly Rust right up to the Flask framework itself.

Why Flask’s maturity still beats Robyn’s speed
So our fun little spike to explore the Robyn framework is going to remain just that. I’m sticking with Flask. I’ve talked about this before, but maturity in a library or framework is a big plus. The ecosystem for Flask/Quart is much bigger and more polished than for the smaller Robyn framework. More than that, the app server runtime for Robyn is much less polished than some of the pluggable app servers out there (think Granian, Gunicorn, uvicorn, etc.). For example, Robyn does not support web garden process recycling. In many servers you can say that after five hours or 10,000 requests or so, the server should slowly drain requests from a process, spin up a new one, and shut down the old one, just to keep things fresh. This helps if you’re using some library that holds on to too many caches or has some other weird memory behavior.

Was the Robyn experiment a waste of time?
Even though I spent maybe close to six hours working on this exploration and decided not to use it, I still found it super valuable. I created the fun Chameleon Robyn package to help people using Robyn have a greater choice of template languages. I got to see my apps from multiple perspectives. I built out some tooling for Claude that I’m going to write about later that is generally really awesome. And I ended up saving significant memory for some of my biggest web apps just by spending more time thinking about how I’m running them currently in Granian and Flask.

23.03.2026 16:31:19


On March 19, OpenAI announced that it would acquire Astral, the company behind uv, Ruff, and ty. The Astral team, led by founder Charlie Marsh, will join OpenAI’s Codex team. The deal is subject to regulatory approval. First and foremost: congratulations to Charlie Marsh and the entire Astral team. They shipped some of the most beloved tools in the Python ecosystem and raised the bar for what developer tooling can be. This acquisition is a reflection of the impact they’ve had. This is big news for the Python ecosystem, and it matters to us at JetBrains. Here’s our perspective.

What Astral built
In just two years, Astral transformed Python tooling. Their tools now see hundreds of millions of downloads every month, and for good reason:
- uv is a blazing-fast package and environment manager that unifies functionality from pip, venv, pyenv, pipx, and more into a single tool. With around 124 million monthly downloads, it has quickly become the default choice for many Python developers.
- Ruff is an extremely fast linter and formatter, written in Rust. For many teams it has replaced flake8, isort, and black entirely.
- ty is a new type checker for Python. It’s still early, and we’re already working on it with PyCharm. It’s showing promise.
This is foundational infrastructure that millions of developers rely on every day. We’ve integrated both Ruff and uv into PyCharm because they substantially make Python development better.

The risks are real, but manageable
Change always carries risk, and acquisitions are no exception. The main concern here is straightforward: if Astral’s engineers get reassigned to OpenAI’s more commercial priorities, these tools could stagnate over time. The good news is that Astral’s tools are open-source under permissive licenses. The community can fork them if it ever comes to that. As Armin Ronacher has noted, uv is “very forkable and maintainable.” There’s no possible future where these tools go backwards.
Both OpenAI and Astral have committed to continued open-source development. We take them at their word, and we hope for the best.

Our commitment hasn't changed

JetBrains already has great working relationships with both the Astral and Codex teams. We've been integrating Ruff and uv into PyCharm, and we will continue to do so. We've submitted some upstream improvements to ty. Regardless of who owns these tools, our commitment to supporting the best Python tooling for our users stays the same. We'll keep working with whoever maintains them. The Python ecosystem is stronger because of the work Astral has done. We hope this acquisition amplifies that work rather than diminishing it. We'll be watching closely, and we'll keep building the best possible experience for Python developers in PyCharm.

23.03.2026 16:04:34


Way back in 2005, lots of people (ordinary people, not just people who work in tech) used to have personal blogs where they wrote about things, rather than using third-party short-form social media sites. I was one of those people (though I wasn't yet blogging on this specific site, which launched the following year). And back in 2005, and even earlier, people liked to have comment sections on their blogs where readers could leave their thoughts on posts. And that was an absolute magnet for spam. There were a few attempts to do something about this. One of them was Akismet, which launched that year and provided a web service you could send a comment (or other user-generated-content) submission to, and get back a classification of spam or not-spam. It turned out to be moderately popular, and is still around today. The folks behind Akismet also documented their API and set up an API key system so people could write their own clients/plugins for various programming languages and blog engines and content-management systems. And so pretty quickly after the debut of the Akismet service, Michael Foord, whom the Python community, and the world, tragically lost at the beginning of 2025, wrote and published a Python library, which he appropriately called akismet, that acted as an API client for it. He published a total of five releases of his Python Akismet library over the next few years, and people started using it. Including me, because I had several use cases for spam filtering as a service. And for a while, things were good. But then Python 3 was released, and people started getting serious about migrating to it, and Michael, who had been promoted into the Python core team, didn't have a ton of time to work on it. So I met up with him at a conference in 2015, and offered to maintain the Akismet library, and he graciously accepted the offer, imported a copy of his working tree into a GitHub repository for me, and gave me access to publish new packages.
In the process of porting the code to support both Python 2 and 3 (as was the fashion at the time), I did some rewriting and refactoring, mostly focused on simplifying the configuration process and the internals. Some configuration mechanisms were deprecated in favor of either explicitly passing in the appropriate values or using the 12-factor approach of storing configuration in environment variables, and the internal HTTP request stack, based entirely on the somewhat-cumbersome (at that time) Python standard library, was replaced with a dependency on requests. The result was akismet 1.0, published in 2017. Over the next six years, I periodically pushed out small releases of akismet, mostly focused on keeping up with upstream Python version support (and finally going Python-3-only in 2020, when Python 2.7 reached its end of upstream support). But beginning in 2024, I embarked on a more ambitious project which spanned multiple releases and turned into a complete rewrite of akismet, which finished a few months ago. So today I'd like to talk about why I chose to do that, how the process went, and what it produced.

Why?

Although I'm not generally a believer in the concept of software projects being "done" and thus no longer needing active work (in the same sense as "a person isn't really dead as long as their name is still spoken", I believe a piece of software isn't really "done" as long as it has at least one user), a major rewrite is still something that needs a justification. In the case of akismet, there were two specific things I wanted to accomplish that led me to this point. One was support for a specific feature of the Akismet API. The akismet Python client's implementation of the most important API method (the one that tells you whether Akismet thinks content is spam, called comment-check) had, since the very first version, always returned a bool.
Which at first sight makes sense, because the Akismet web service's response body for that endpoint is plain text and is either the string true (Akismet thinks the content is spam) or the string false (Akismet thinks it isn't spam). Except actually Akismet supports a third option: "blatant" spam, meaning Akismet is so confident in its determination that it thinks you can throw away the content without further review (while a normal "spam" determination might still need a human to look at it and double-check). It signals this by returning the true text response and also setting a custom HTTP response header (X-Akismet-Pro-Tip: discard). But the akismet Python client couldn't usefully expose this, since the original API design of the client chose to have this method return a two-valued bool instead of some other type that could handle a three-valued situation. And any attempt to fix it would necessarily change the return type, which would be a breaking change. The other big motivating factor for a rewrite was the rise of asynchronous Python via async and await, originally introduced in Python 3.5. The async Python ecosystem has grown tremendously, and I wanted to have a version of akismet that could support async/non-blocking HTTP requests to the Akismet web service.

Keep it classy?

The first thing I did was spend a bit of time exploring whether I could replace the entire class-based design of the library. Since the very first version back in 2005, the akismet library had always provided its client as a class (named Akismet) with one method for each supported Akismet HTTP API method. But it's always worth asking if a class is actually the right abstraction. Very often it's not! And while Python is an object-oriented language and allows you to write classes, it doesn't require you to write them. So I spent a little while sketching out a purely function-based API. One immediate issue with this was how to handle the API credentials.
Akismet requires you to obtain an API key and to register one or more sites which will use that API key, and most Akismet web API operations require that both the API key and the current site be sent with the request. There's also a verify-key API operation which lets you submit a key and site and tells you if they're valid; if you don't use this, and accidentally start trying to use the rest of the Akismet API with an invalid key and/or site, the other Akismet API operations send back responses with a body of invalid. As noted above, the 1.0 release already nudged users of akismet in the direction of putting config in the environment, so reading the key and site from environment variables was already well supported. But some people probably can't, or won't want to, use environment variables for configuration. For example: they might have multiple sets of Akismet credentials in a multi-tenant application, and need to explicitly pass different sets of credentials depending on which site they're performing checks for. So in any function-based interface, all the functions would not only need to be able to read configuration from the environment (which at least could be factored out into a helper function), they'd also need to explicitly accept credentials as optional arguments. That complicates the argument signatures (which are already somewhat gnarly because of all the optional information you can provide to Akismet to help with spam determinations), and makes the API start to look cumbersome. This was a clue that the function-based approach was probably not the right one: if a bunch of functions all have to accept extra arguments for a common piece of data they all need, it's a sign that they may really want to be a class which just has the necessary data available internally. The other big sticking point was how to handle credential verification.
It requires an HTTP request/response to Akismet, so ideally you'd do this once (per set of credentials, per process). Say, if you're using Akismet in a web application, you'd want to check your credentials at process startup, and then just treat them as known-good for the lifetime of the process after that. Which is what the existing class-based code did: it performed a verify-key on instantiation and then could re-use the verified credentials after that point (or raise an immediate exception if the credentials were missing or invalid). I really like the ergonomics of that, since it makes it much more difficult to create an Akismet client in an invalid/misconfigured state, but it basically requires some sort of shared state. Even if the API key and site URL are read from the environment or passed as arguments every time, there needs to be some sort of additional information kept by the client code to indicate they've been validated. It still would be possible to do this in a function-based interface. It could implicitly verify each new key/site pair on first use, and either keep a full list of ones that had been verified or maybe some sort of LRU cache of them. Or there could be an explicit function for introducing a new key/site pair and verifying them. But the end result of that is a secretly-stateful module full of functions that rely on (and in some cases act on) the state; at that point the case for it being a class is pretty overwhelming. As an aside, I find that spending a bit of time thinking about, or perhaps even writing sample documentation for, how to use a hypothetical API often uncovers issues like this one. Also, for a lot of people it's seemingly a lot easier, psychologically, to throw away documentation than to throw away even barely-working code.

One class or two?

Another idea that I rejected pretty quickly was trying to stick to a single Akismet client class.
There is a trend of libraries and frameworks providing both sync and async code paths in the same class, often using a naming scheme which prefixes the async versions of the methods with an a (like method_name() for the sync version and amethod_name() for async), but it wasn't really compatible with what I wanted to do. As mentioned above, I liked the ergonomics of having the client automatically validate your API key and site URL, but doing that in a single class supporting both sync and async has a problem: which code path should perform the automatic credential validation? Users who want async wouldn't be happy about a synchronous/blocking request being automatically issued. And trying to choose the async path by default would introduce issues of how to safely obtain a running event loop (and not just any event loop, but an instance of the particular event loop implementation the end user of the library actually wants). So I made the decision to have two client classes, one sync and one async. As a nice bonus, this meant I could do all the work of rewriting in new classes with new names. That would let me mark the old Akismet class as deprecated but not have to immediately remove it or break its API, giving users of akismet plenty of notice of what was going on and a chance to migrate to the new clients. So I started working on the new client classes, calling them akismet.SyncClient and akismet.AsyncClient to be as boringly clear as possible about what they're for.

How to handle async, part one

Unfortunately, the two-class solution didn't fully solve the issue of how to handle the automatic credential validation. On the old Akismet client class it had been easy, and on the new SyncClient class it would still be easy, because the __init__() method could perform a verify-key operation before returning, and raise an exception if the credentials weren't found or were invalid.
But in Python, __init__() cannot be (usefully) async, which posed the tricky question of how to perform automatic credential validation at instantiation time for AsyncClient. As I dug into this I considered a few different options, and at one point even thought about going back to the one-class approach just to be able to issue a single HTTP request at instantiation without needing an event loop. But I wanted AsyncClient to be truly and thoroughly async, so I ended up settling on a compromise solution, implemented in two phases. First, both SyncClient and AsyncClient were given an alternate constructor method named validated_client(). Alternate constructors can be usefully async, so the AsyncClient version could be implemented as an async method. I documented that if you're directly constructing a client instance you intend to keep around for a while, this is the preferred constructor, since it will perform automatic credential validation for you (direct instantiation via __init__() will not, on either class). And then I implemented the context-manager protocol for SyncClient and the async context-manager protocol for AsyncClient. This allows constructing the sync client in a with statement, or the async client in an async with statement. And since async with is an async execution context, it can issue an async HTTP request for credential validation. So you can get automatic credential validation from either approach, depending on your needs:

import akismet

# Long-lived client object you'll keep around:
sync_client = akismet.SyncClient.validated_client()
async_client = await akismet.AsyncClient.validated_client()

# Or for the duration of a "with" block, cleaned up at exit:
with akismet.SyncClient() as sync_client:
    ...  # Do things...

async with akismet.AsyncClient() as async_client:
    ...  # Do things...

Most Python libraries can benefit from these sorts of conveniences, so I'd recommend investing time into learning how to implement them.
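The combination of an async alternate constructor and the async context-manager protocol can be sketched roughly like this. This is a simplified illustration, not akismet's actual implementation: the class name, attributes, and the _verify() coroutine (which stands in for the real verify-key HTTP request) are all invented for the example.

```python
import asyncio


class Client:
    """Illustrative async client; credentials here are dummy placeholders."""

    def __init__(self, key: str = "key", site: str = "https://example.com"):
        self.key = key
        self.site = site
        self.verified = False

    async def _verify(self) -> None:
        # Stand-in for the real async verify-key HTTP request.
        await asyncio.sleep(0)
        self.verified = True

    @classmethod
    async def validated_client(cls, **kwargs) -> "Client":
        # Unlike __init__(), a classmethod constructor can be async,
        # so it can await the credential check before returning.
        client = cls(**kwargs)
        await client._verify()
        return client

    async def __aenter__(self) -> "Client":
        # "async with" is an async execution context, so verification
        # can happen on entry.
        await self._verify()
        return self

    async def __aexit__(self, *exc) -> None:
        return None


async def main() -> None:
    long_lived = await Client.validated_client()
    print(long_lived.verified)  # True

    async with Client() as scoped:
        print(scoped.verified)  # True


asyncio.run(main())
```

Either path ends with a client that has already proven its credentials, which is the ergonomic property described above.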
If you're looking for ideas, Lynn Root's "The Design of Everyday APIs" covers a lot of ways to make your own code easier to use.

How to handle async, part deux

The other thing about writing code that supports both sync and async operations is how to handle the things they have in common. There are a few different ways to do this: you can write one implementation and have the other one call it. Or you can write two full implementations and live with the duplication. Or you can try to separate the I/O and the pure logic as much as possible, and reuse the logic while duplicating only the I/O code (or, since the two implementations aren't perfect duplicates, writing two I/O implementations which heavily rhyme). For akismet, I went with a hybrid of the last two of these approaches. I started out with my two classes each fully implementing everything they needed, including a lot of duplicate code between them (in fact, the first draft was just one class which was then copy/pasted and async-ified to produce the other). Then I gradually extracted the non-I/O bits into a common module they could both import from and use, building up a library of helpers for things like validating arguments, preparing requests, processing the responses, and so on. One final object-oriented design decision here (or, I guess, non-object-oriented decision): that common code is a set of functions in a module. It's not a class. It's not stateful the way the clients themselves are: turning an Akismet web API response into the desired Python return value, or validating a set of arguments and turning them into the correct request parameters (to pick a couple of examples), are literally pure functions, whose outputs depend solely on their inputs. And the common code also isn't some sort of abstract base class that the two concrete clients inherit from. An akismet.SyncClient and an akismet.AsyncClient are not two different subtypes of a parent "Akismet client" class or interface!
Because of the different calling conventions of sync and async Python, there is no public parent interface that they share or could be substitutable for. The current code of akismet still has some duplication, primarily around error handling, since the try/except blocks need to wrap the correct version of their respective I/O operations, and I might be able to achieve some further refactoring to reduce that to the absolute minimum (for example, by splitting out a bunch of duplicated except clauses into a single common pattern-matching implementation, now that Python 3.10 is the minimum supported version). But I'm not in a big hurry to do that; the current code is, I think, in a pretty reasonable state.

Enumerating the options

As I mentioned back at the start of this post, the akismet library historically used a Python bool to indicate the result of a spam-checking operation: either the content was spam (True) or it wasn't (False). Which makes a lot of sense at first glance, and also matches the way the Akismet web service behaves: for content it thinks is spam, the HTTP response has a body consisting of the string true, and for content that it doesn't think is spam, the response body is the string false. But for many years now, the Akismet web service has actually supported three possible values, with the third option being "blatant" spam: spam so obvious that it can simply be thrown away with no further human review. Akismet signals this by returning the true response body and then adding a custom HTTP header to the response: X-Akismet-Pro-Tip, with a value of discard. Python has had support for enums (via the enum module in the standard library) since Python 3.4, so that seemed the most natural way to represent the possible results.
The enum module lets you use lots of different data types for enum values, but I went with an integer-valued enum (enum.IntEnum) for this, because it lets developers still work with the result as a pseudo-boolean type if they don't care about the extra information from the third option (since in Python, 0 is false and all other integers are true).

Python historical trivia: Originally, Python did not have a built-in boolean type, and the typical convention was similar to C, using the integers 0 and 1 to indicate false/true. Python phased in a real boolean type early in the Python 2 days. First, the Python 2.2 release series (technically, Python 2.2.1) assigned the built-in names False and True to the integer values 0 and 1, and introduced a built-in bool() function which returned the integer truth value of its argument. Then in Python 2.3, the bool type was formally introduced, implemented as a subclass of int constrained to have only two instances. Those instances are bound to the names False and True and have the integer values 0 and 1. That's how Python's bool still works today: it's still a subclass of int, so you can use a bool anywhere an int is called for, and do arithmetic with booleans if you really want to, though this isn't really useful except for writing deliberately-obfuscated code. For more details on the history and decision process behind Python's bool type, check out PEP 285 and this blog post from Guido van Rossum.

The only tricky thing here was how to name the third enum member. The first two were HAM and SPAM, to match the way Akismet describes them. The third value is described as "blatant spam" in some documentation, but is represented by the string "discard" in responses, so BLATANT_SPAM and DISCARD both seemed like reasonable options. I ended up choosing DISCARD; it probably doesn't matter much, but I like having the name match the actual value of the response header.
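Putting the pieces together, the three-way result and the logic for folding the response body and the X-Akismet-Pro-Tip header into one value can be sketched roughly like this. It is an illustrative sketch: parse_comment_check is a hypothetical helper, not part of akismet's public API, and the exact member values are an assumption (though the "not spam" member must be 0 so it stays falsy).

```python
import enum


class CheckResponse(enum.IntEnum):
    # HAM must be 0 so it remains falsy; the spam members are truthy,
    # preserving the old boolean-style "is it spam?" usage.
    HAM = 0
    SPAM = 1
    DISCARD = 2


def parse_comment_check(body: str, headers: dict[str, str]) -> CheckResponse:
    # Hypothetical helper: Akismet returns "true"/"false" in the response
    # body, and signals "blatant" spam with X-Akismet-Pro-Tip: discard.
    if body == "true":
        if headers.get("X-Akismet-Pro-Tip") == "discard":
            return CheckResponse.DISCARD
        return CheckResponse.SPAM
    return CheckResponse.HAM


result = parse_comment_check("true", {"X-Akismet-Pro-Tip": "discard"})
print(result.name)  # DISCARD
print(bool(result))  # True: still usable as a pseudo-boolean
print(bool(parse_comment_check("false", {})))  # False
```

Because the enum is an IntEnum, code that only cares about "spam or not" can keep treating the result as a boolean, while code that wants the discard signal can compare against the members directly.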
The enum itself is named CheckResponse, since it represents the response values of the spam-checking operation (Akismet actually calls the operation comment-check because that's what its original name was, despite the fact that Akismet now supports sending other types of content besides comments).

Bring your own HTTP client

Back when I put together the 1.0 release, akismet adopted the requests library as a dependency, which greatly simplified the process of issuing HTTP requests to the Akismet web API. As part of the more recent rewrite, I switched instead to the Python HTTPX library, which has an API broadly compatible with requests but also, importantly, provides both sync and async implementations. Async httpx requires the use of a client object (the equivalent of a requests.Session), so the Akismet client classes each internally construct the appropriate type of httpx object: httpx.Client for akismet.SyncClient, and httpx.AsyncClient for akismet.AsyncClient. And since the internal usage was switching from directly calling the function-based API of requests to using HTTP client objects, it seemed like a good idea to also allow passing in your own HTTP client object in the constructors of the Akismet client classes. These are annotated as httpx.Client/httpx.AsyncClient, but as a practical matter anything with a compatible API will work. One immediate benefit of this is that it's easier to accommodate situations like HTTP proxies, and server environments where all outbound HTTP requests must go through a particular proxy.
You can just create the appropriate type of HTTP client object with the correct proxy settings, and pass it to the constructor of the Akismet client class:

import akismet
import httpx

from your_app.config import settings

akismet_client = akismet.SyncClient.validated_client(
    http_client=httpx.Client(
        proxy=settings.PROXY_URL,
        headers={"User-Agent": akismet.USER_AGENT},
    )
)

But an even bigger benefit came a little bit later on, when I started working on improvements to akismet's testing story.

Testing should be easy

Right here, right now, I'm not going to get into a deep debate about how to define "unit" versus "integration" tests, or which types you should be writing. I'll just say that historically, libraries which make HTTP requests have been some of my least favorite code to test, whether as the author of the library or as a user of it verifying my usage. Far too often this ends up with fragile piles of patched-in mock objects to try to avoid the slowdowns (and other potential side effects and even dangers) of making real requests to a live, remote service during a test run. I do think some fully end-to-end tests making real requests are necessary and valuable, but they probably should not be part of the main test suite that you run every time you're making changes in local development. Fortunately, httpx offers a feature that I wrote about a few years ago which greatly simplifies both akismet's own test suite and your ability to test your usage of it: swappable HTTP transports, which you can drop in to affect HTTP client behavior, including a MockTransport that doesn't make real requests but lets you programmatically supply responses. So akismet ships with two testing variants of its API clients: akismet.TestSyncClient and akismet.TestAsyncClient. They're subclasses of the real ones, but they use the ability to swap out HTTP clients (covered above) to plug in custom HTTP clients with MockTransport and hard-coded stock responses.
This lets you write code like:

import akismet

class AlwaysSpam(akismet.TestSyncClient):
    comment_check_response = akismet.CheckResponse.SPAM

and then use it in tests. The test client above will never issue a real HTTP request, and will always label any content you check with it as spam. You can also set the attribute verify_key_response to False on a test client to have it always fail API key verification, if you want to test your handling of that situation. This means you can test your use of akismet without having to build piles of custom mocks and patch them in to the right places. You can just drop in instances of appropriately-configured test clients and rely on their behavior. If I ever became King of Programming, with the ability to issue enforceable decrees, requiring every network-interacting library to provide this kind of testing-friendly version of its core constructs would be among them. But since I don't have that power, I do what I can by providing it in my own libraries.

(py)Testing should be easy

In the Python ecosystem there are two major testing frameworks: the unittest module in the Python standard library, which is a direct port to Python of the xUnit style of test frameworks seen in many other languages (including xUnit-style naming conventions, which don't match typical Python naming conventions), and the third-party pytest framework, which aims to be a more natively "Pythonic" testing framework and encourages function- rather than class-based tests and heavy use of dependency injection (which it calls fixtures). For a long time I stuck to unittest, or unittest-derived testing tools like the ones that ship with Django.
Although I understand and appreciate the particular separation of concerns pytest is going for, I found its fixture system a bit too magical for my taste; I personally prefer dependency injection to use explicit registration so I can know what's available, versus the implicit way pytest discovers fixtures based on their presence or absence in particularly-named locations. But pytest pretty consistently shows up as more popular and more broadly used in surveys of the Python community, and every place I've worked for the last decade or so has used it. So I decided to port akismet's tests to pytest, and in the process decided to write a pytest plugin to help users of akismet with their own tests. That meant writing a pytest plugin to automatically provide a set of dependency-injection fixtures. There are four fixtures: two sync and two async, with each flavor getting a fixture to provide a client class object (which lets you test instantiation-time behavior like API key verification failures), and a fixture to provide an already-constructed client object. Configuration is through a custom pytest mark called akismet_client, which accepts arguments specifying the desired behavior. For example:

import akismet
import pytest

@pytest.mark.akismet_client(comment_check_response=akismet.CheckResponse.DISCARD)
def test_akismet_discard_response(akismet_sync_client: akismet.SyncClient):
    # Inside this test, akismet_sync_client's comment_check() will always
    # return DISCARD.
    ...

@pytest.mark.akismet_client(verify_key_response=False)
def test_akismet_fails_key_verification(akismet_sync_class: type[akismet.SyncClient]):
    # API key verification will always fail on this class.
    with pytest.raises(akismet.APIKeyError):
        akismet_sync_class.validated_client()

Odds and ends

Python has had the ability to add annotations to function and method signatures since 3.0, and more recently gained the ability to annotate attributes as well; originally, no specific use case was mandated for this feature, but everybody used it for type hints, so now that's the official use case for annotations. I've had a lot of concerns about the way type hinting and type checking have been implemented for Python, largely around the fact that idiomatic Python really wants to be a structurally-typed language (or, as some people have called it, "interfacely-typed"), rather than nominally-typed. Which is to say: in Python you almost never care about the actual exact type name of something; you care about the interfaces (nowadays called "protocols" in Python typing-speak) it implements. So you don't care whether something is precisely an instance of list; you care about it being iterable or indexable or whatever. On top of which, some design choices made in the development of type-hinted Python have made it (as I understand it) impossible to distribute a single-file module with type hints and have type checkers actually pick them up. Which was a problem for akismet, because traditionally it was a single-file module, installing a file named akismet.py containing all its code. But as part of the rewrite I was reorganizing akismet into multiple files, so that objection no longer held, and eventually I went ahead and began running mypy as a type checker as part of the CI suite for akismet. The type annotations had been added earlier, because I find them useful as inline documentation even if I'm not running a type checker (and the Sphinx documentation tool, which all my projects use, will automatically extract them to document argument signatures for you).
I did have to make some changes to work around mypy, though. It didn't find any bugs, but it did uncover a few things that were written in ways it couldn't handle, and maybe I'll write about those in more detail another time. As part of splitting akismet up into multiple files, I also went with an approach I've used on a few other projects, of prefixing most file names with an underscore (i.e., the async client is defined in a file named _async_client.py, not async_client.py). By convention, this marks the files in question as "private", and though Python doesn't enforce that, many common Python linters will flag violations of it. The things that are meant to be supported public API are exported via the __all__ declaration of the akismet package. I also switched the version numbering scheme to Calendar Versioning. I don't generally trust version schemes that try to encode information about API stability or breaking changes into the version number, but a date-based version number at least tells you how old something is and gives you a general idea of whether it's still being actively maintained. There are also a few dev-only changes:

* Local dev environment management and packaging are handled by PDM and its package-build backend. Of the current crop of clean-sheet modern Python packaging tools, PDM is my personal favorite, so it's what my personal projects are using.
* I added a Makefile which can execute a lot of common developer tasks, including setting up the local dev environment with proper dependencies, and running the full CI suite or subsets of its checks.
* As mentioned above, the test suite moved from unittest to pytest, using AnyIO's plugin for supporting async tests in pytest. There's a lot of use of pytest parametrization to generate test cases, so the number of test cases grew a lot, but it's still pretty fast: around half a second for each Python version being tested, on my laptop.
The full CI suite, testing every supported Python version and running a bunch of linters and packaging checks, takes around 30 seconds on my laptop, and about a minute and a half on GitHub CI.

That's it (for now)

In October of last year I released akismet 25.10.0 (and then 25.10.1 to fix a documentation error, because there's always something wrong with a big release), which completed the rewrite process by finally removing the old Akismet client class. At this point I think akismet is feature-complete unless the Akismet web service itself changes, so although there were more frequent releases over a period of about a year and a half as I did the rewrite, it's likely the cadence will now settle down to one release a year (to handle supporting new Python versions as they come out) unless someone finds a bug. Overall, I think the rewrite was an interesting process, because it was pretty drastic (I believe it touched literally every pre-existing line of code, and added a lot of new code), but also… not that drastic? If you were previously using akismet with your configuration in environment variables (as recommended), I think the only change you'd need to make is rewriting imports from akismet.Akismet to akismet.SyncClient. The mechanism for manually passing in configuration changed, but I believe that and the new client class names were the only actual breaking changes in the entire rewrite; everything else was adding features/functionality or reworking the internals in ways that didn't affect the public API. I had hoped to write this up sooner, but I've struggled with this post for a while now, because I still have trouble with the fact that Michael's gone, and every time I sat down to write I was reminded of that. It's heartbreaking to know I'll never run into him at a conference again. I'll miss chatting with him. I'll miss his energy. I'm thankful for all he gave to the Python community over many years, and I wish I could tell him that one more time.
And though it's a small thing, I hope I've managed to honor his work and to repay some of his kindness and his trust in me by being a good steward of his package. I have no idea whether Akismet the service will still be around in another 20 years, or whether I'll still be around or writing code or maintaining this Python package in that case, but I'd like to think I've done my part to make sure it's on sound footing to last that long, or longer.

23.03.2026 14:09:23

Information Technology
3 days ago

Learning Python can be genuinely hard, and it’s normal to struggle with fundamental concepts. Research has shown that note-taking is invaluable when learning new things. This guide will help you get the most out of your learning efforts by showing you how to take better notes as you walk through an existing tutorial and keep handwritten notes on the side.

In this guide, you’ll begin by briefly learning about the benefits of note-taking. Then, you’ll follow along with an existing Real Python tutorial as you perform note-taking steps to help make the information in the tutorial really stick. To help you stay organized as you practice, you can download the free Python Note-Taking Worksheet, which outlines the process you’ll learn here and provides a repeatable framework you can use with future tutorials. There is also an interactive “How to Use Note-Taking to Learn Python” quiz; you’ll receive a score upon completion to help you track your learning progress.

What Is Python Note-Taking?

In the context of learning, note-taking is the process of recording information from a source while you’re consuming it. A traditional example is a student jotting down key concepts during a lecture. Another example is typing out lines of code or unfamiliar words while watching a video course, listening to a presentation, or reading a learning resource. In this guide, Python note-taking refers to taking notes specific to learning Python. People take notes for a variety of reasons. Usually, the intent is to return to the notes at a later time to remind the note-taker of the information covered during the learning session.
In addition to the value of having a physical set of notes to refer back to, studies have found that the act of taking notes alone improves a student’s ability to recall information on a topic. This guide focuses on handwritten note-taking—that is, using a writing utensil and paper. Several studies suggest that this form of note-taking is especially effective for understanding a topic and remembering it later. If taking notes by hand isn’t viable for you, don’t worry! The concepts presented here should be applicable to other forms of note-taking as well.

Prerequisites

Since this guide focuses on taking notes while learning Python programming, you’ll start by referencing the Real Python tutorial Python for Loops: The Pythonic Way. This resource is a strong choice because it clearly explains a fundamental programming concept that you’ll use throughout your Python journey. Once you have the resource open in your browser, set aside a few pieces of paper and have a pen or pencil ready. Alternatively, you can take notes on a tablet with a stylus or another writing tool. Generally, taking notes by hand has a stronger impact on learning than other methods, such as typing into a text document. For more information on the effectiveness of taking notes by hand versus typing, see this article from the Harvard Graduate School of Education.

Step 1: Write Down Major Concepts

With your note-taking tools ready, start by skimming the learning resource. Usually, you want to look at the major headings to see what topics the material covers. For Real Python content, you can instead just look at the table of contents at the top of the page, since this lists the main sections.
The major headings for your example resource are as follows:

- Getting Started with the Python for Loop
- Traversing Built-In Collections in Python
- Using Advanced for Loop Syntax
- Exploring Pythonic Looping Techniques
- Understanding Common Pitfalls in for Loops
- Using for Loops vs Comprehensions
- Using async for Loops for Asynchronous Iteration

The list above doesn’t include subheadings like “Sequences: Lists, Tuples, Strings, and Ranges” under “Traversing Built-In Collections in Python”. For now, stick to top-level headings. Read the full article at https://realpython.com/python-note-taking-guide/
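Since the tutorial you’ll be annotating is about for loops, a quick taste of its subject may help as you skim the headings. This is a generic illustration, not code taken from the tutorial itself:

```python
colors = ["red", "green", "blue"]

# Index-based looping, common in other languages, works but is
# discouraged in Python.
for i in range(len(colors)):
    print(colors[i])

# The Pythonic way: iterate directly over the collection.
for color in colors:
    print(color)

# When you genuinely need the index too, enumerate() provides both.
for i, color in enumerate(colors):
    print(i, color)
```

Jotting down small contrasts like this (index-based vs direct iteration) is exactly the kind of note the guide recommends capturing.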

23.03.2026 14:00:00

Information Technology
3 days ago

I got access to Claude Max for 6 months, as a promotional move Anthropic made to Open Source Software contributors. My main OSS impact is as a maintainer for NumPy, but I decided to see what claude-code could do for PyPy's failing 3.11 tests. Most of these failures are edge cases: error messages that differ from CPython, or debugging tools that fail in certain cases. I was worried about letting an AI agent loose on my development machine. I noticed a post by Patrick McCanna (thanks Patrick!) that pointed to using bubblewrap to sandbox the agent. So I set it all up and (hopefully securely) pointed claude-code at some tests.

Setting up

There were a few steps to make sure I didn't open myself up to obvious gotchas. There are stories about agents wiping out databases, or deleting mailboxes.

Bubblewrap

First I needed to see what bubblewrap does. I followed the instructions in the blog post to set things up with some minor variations:

sudo apt install bubblewrap

I couldn't run bwrap. After digging around a bit, I found I needed to add an exception for AppArmor on Ubuntu 24.04:

sudo bash -c 'cat > /etc/apparmor.d/bwrap << EOF
abi <abi/4.0>,
include <tunables/global>
profile bwrap /usr/bin/bwrap flags=(unconfined) {
  userns,
}
EOF'
sudo apparmor_parser -r /etc/apparmor.d/bwrap

Then bwrap would run. It is all locked down by default, so I opened up some exceptions. The arguments are pretty self-explanatory. Ubuntu spreads the executables around the operating system, so I needed access to various directories. I wanted a /tmp for running pytest.
I also wanted the prompt to reflect the use of bubblewrap, so I changed the hostname:

cat << 'EOL' >> ./run_bwrap.sh
function call_bwrap() {
  bwrap \
    --ro-bind /usr /usr \
    --ro-bind /etc /etc \
    --ro-bind /run /run \
    --symlink usr/lib /lib \
    --symlink usr/lib64 /lib64 \
    --symlink usr/bin /bin \
    --proc /proc \
    --dev /dev \
    --bind $(pwd) $(pwd) \
    --chdir $(pwd) \
    --unshare-user --unshare-pid --unshare-ipc --unshare-uts --unshare-cgroup \
    --die-with-parent \
    --hostname bwrap \
    --tmpfs /tmp \
    /bin/bash "$@"
}
EOL
source ./run_bwrap.sh
call_bwrap
# now I am in a sandboxed bash shell
# play around, try seeing other directories, getting sudo, or writing outside
# the sandbox
exit

I did not use --unshare-net since, after all, I want to use claude and that needs network access. I did add rw access to $(pwd) since I want it to edit code in the current directory; that is the whole point.

Basic claude

After trying out bubblewrap and convincing myself it does actually work, I installed claude code:

curl -fsSL https://claude.ai/install.sh | bash

Really Anthropic, this is the best way to install claude? No dpkg? I ran claude once (unsafely) to get logged in. It opened a webpage, and saved the login to the oauthAccount field in ~/.claude.json.
Now I changed my bash script to this to get claude to run inside the bubblewrap sandbox:

cat << 'EOL' >> ./run_claude.sh
claude-safe() {
  bwrap \
    --ro-bind /usr /usr \
    --ro-bind /etc /etc \
    --ro-bind /run /run \
    --ro-bind "$HOME/.local/share/claude" "$HOME/.local/share/claude" \
    --symlink usr/lib /lib \
    --symlink usr/lib64 /lib64 \
    --symlink usr/bin /bin \
    --symlink "$HOME/.local/share/claude/versions/2.1.81" "$HOME/.local/bin/claude" \
    --proc /proc \
    --dev /dev \
    --bind $(pwd) $(pwd) \
    --bind "$HOME/.claude" "$HOME/.claude" \
    --bind "$HOME/.claude.json" "$HOME/.claude.json" \
    --chdir $(pwd) \
    --unshare-user --unshare-pid --unshare-ipc --unshare-uts --unshare-cgroup \
    --die-with-parent \
    --hostname bwrap \
    --tmpfs /tmp \
    --setenv PATH "$HOME/.local/bin:$PATH" \
    claude "$@"
}
EOL
source ./run_claude.sh
claude-safe

Now I can use claude. Note it needs some more directories in order to run. This script hard-codes the version; in the future YMMV. I want it to be able to look at GitHub, and also my local checkout of CPython so it can examine differences. I created a read-only token by clicking on my avatar in the upper right corner of a GitHub web page, then going to Settings → Developer settings → Personal access tokens → Fine-grained tokens → Generate new token. Since pypy is in the pypy org, I used "Repository owner: pypy", "Repository access: pypy (only)" and "Permissions: Contents". Then I made doubly sure the token permissions were read-only. And checked again. Then I copied the token to the bash script. I also added a ro-bind to the cpython checkout, so I could tell claude code where to look for CPython implementations of missing PyPy functionality:

--ro-bind "$HOME/oss/cpython" "$HOME/oss/cpython" \
--setenv GH_TOKEN "hah, sharing my token would not have been smart" \

Claude /sandbox

Claude comes with its own sandbox, configured by using the /sandbox command.
I chose the defaults, which prevent malicious code in the repo from accessing the file system and the network. I was missing some packages to get this to work. Claude would hang until I installed them, and I needed to kill it with kill.

sudo apt install socat
sudo npm install -g @anthropic-ai/sandbox-runtime

Final touches

One last thing that I discovered later: I needed to give claude access to some grepping and git tools. While git should be locked down externally so it cannot push to the repo, I do want claude to look at other issues and pull requests in read-only mode. So I added a local .claude/settings.json file inside the repo (see below for which directory to do this):

{
  "permissions": {
    "allow": [
      "Bash(sed*)",
      "Bash(grep*)",
      "Bash(cat*)",
      "Bash(find*)",
      "Bash(rg*)",
      "Bash(python*)",
      "Bash(pytest*)"
    ]
  }
}

Then I made git ignore it, even when doing a git clean, in a local (not part of the repo) configuration:

echo -n .claude >> ~/.config/git/ignore

What about git push? I don't want claude messing around with the upstream repo, only read access. But I did not actively prevent git push. So instead of using my actual pypy repo, I cloned it to a separate directory and did not add a remote pointing to github.com.

Fixing tests - easy

Now that everything is set up (I hope I remembered everything), I could start asking questions. The technique I chose was to feed claude the whole test failure from the buildbot. So starting from the buildbot py3.11 summary, click on one of the F links and copy-paste all that into the claude prompt. It didn't take long for claude to come up with solutions for the long-standing ctypes missing-exception error, which turned out to be due to a missing error trap when already handling an error. Also a CTYPES_MAX_ARGCOUNT check was missing. At first, claude wanted to change the ctypes code from CPython's stdlib, and so I had to make it clear that claude was not to touch the files in lib-python.
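To get a feel for how glob-style allow-lists like the one in settings.json behave, here is a toy matcher built on Python's fnmatch. This is only an illustration of the pattern style; it is not Claude Code's actual permission engine, and is_allowed is a name invented for this sketch:

```python
from fnmatch import fnmatch

# Hypothetical allow-list mirroring the settings.json entries above.
ALLOW = ["Bash(sed*)", "Bash(grep*)", "Bash(cat*)", "Bash(find*)",
         "Bash(rg*)", "Bash(python*)", "Bash(pytest*)"]

def is_allowed(command: str) -> bool:
    """Return True if the shell command matches any allow pattern."""
    return any(fnmatch(f"Bash({command})", pattern) for pattern in ALLOW)

print(is_allowed("grep -r TODO ."))   # True: matched by Bash(grep*)
print(is_allowed("git push origin"))  # False: no pattern covers git
```

Note how the trailing `*` makes each entry a prefix match, so `Bash(python*)` permits any python invocation but nothing else.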
They are copied verbatim from CPython and should not be modified without really good reasons. The fix to raise TypeError rather than AttributeError for deleting a ctypes object's value was maybe a little trickier: claude needed to create its own property class and use it in assignments. The fix for a failing test for a correct repr of a ctypes array was a little more involved. Claude needed to figure out that newmemoryview was raising an exception, dive into the RPython implementation and fix the problem, and then also fix a pure-python __buffer__ shape edge case error. There were more, but you get the idea. With a little bit of coaching, and by showing claude where the CPython implementation was, more tests are now passing.

Fixing tests - harder

PyPy has an HPy backend. There were some test failures that were easy to fix (a handle not being closed, an annotation warning). But the big one was a problem with the context tracking before and after ffi function calls. In debug mode there is a check that the ffi call is done using the correct HPy context. It turns out to be tricky to hang on to a reference to a context in RPython since the context RPython object is pre-built. The solution, which took quite a few tokens and translation cycles to work out, was to assign the context on the C level, and have a getter to fish it out in RPython.

Conclusion

I started this journey not more than 24 hours ago, after some successful sessions using claude to refactor some web sites off hosting platforms and make them static pages. I was impressed enough to try coding with it from the terminal. It helps that I was given a generous budget to use Anthropic's tool. Claude seems capable of understanding the layers of PyPy: from the pure python stdlib to RPython and into the small amount of C code. I even asked it to examine a segfault in the recently released PyPy 7.3.21, and it seems to have found the general area where there was a latent bug in the JIT.
Like any tool, agentic programming must be used carefully to make sure it cannot do damage. I hope I closed the most obvious foot-guns; if you have other ideas of things I should do to protect myself while using an agent like this, I would love to hear about them.

23.03.2026 10:27:55

Technology and Science
1 day ago

This article is crossposted from IEEE Spectrum’s careers newsletter. Sign up now to get insider tips, expert advice, and practical strategies, written in partnership with tech career development company Parsity and delivered to your inbox for free!

Engineers Aren’t Bad at Communication. They’re Just Speaking to the Wrong Audience.

There’s a persistent myth that engineers are bad communicators. In my experience, that’s not true. Engineers are often excellent communicators—inside their domain. We’re precise. We’re logical. We structure arguments clearly. We define terms. We reason from constraints. The breakdown happens when the audience changes. We’re used to speaking in highly technical language, surrounded by people who share our vocabulary. In that environment, shorthand and jargon are efficient. But outside that bubble, when talking to executives, product managers, marketing teams, or customers, that same precision can be confusing. The problem isn’t that we can’t communicate. It’s that we forget to translate.

If you’ve ever explained a critical issue or error to a non-technical stakeholder, you’ve probably experienced this: You give a technically accurate explanation. They leave either more confused than before, or more alarmed than necessary. Suddenly you’re spending more time clarifying your explanation than fixing the issue. Under pressure, we default to what we know best—technical detail. But detail without context creates cognitive overload. The listener can’t tell what matters, what’s normal, and what’s dangerous. That’s when the “engineers can’t communicate” narrative shows up. In reality, we just skipped the translation step.

The Writing Shortcut

One of the simplest ways to improve written communication today is surprisingly easy: Run your explanation through an AI model and ask, “would this make sense to a non-technical audience?
Where would someone get confused?” You can also say:

- “Rewrite this for an executive audience.”
- “What analogy would help explain this?”
- “Simplify this without losing accuracy.”

Large language models are particularly good at identifying jargon and offering alternative framings. They’re essentially translation assistants. Analogies are especially powerful. If you’re explaining system latency, compare it to traffic congestion. If you’re describing technical debt, compare it to skipping maintenance on a house. If you’re explaining distributed systems, try using supply chain examples. The goal isn’t to “dumb it down.” It’s to map the unfamiliar onto something familiar. Before sending an email or report, ask yourself:

- Does this audience need to understand the mechanism, or just impact?
- Does this explanation help them make a decision?
- Have I defined terms they might not know?

Translation When Speaking

When speaking—especially in meetings or presentations—most engineers have one predictable habit: We speak too fast. Nerves speed us up. Speed causes filler words. Filler words dilute authority. To prevent that, follow a simple rule: Speak 10 to 15 percent slower than feels natural. Slowing down cuts down the number of times you say “um” and “uh”, gives you time to think, makes you sound more confident, and gives the listener time to process. Another rule: Say only what the audience needs to move forward. Explain just enough for the person to make a decision. If you overload someone with implementation details when they only need tradeoffs, you’ve made their job harder.

The Real Skill

The key skill in communication is audience awareness. The same engineer who can clearly explain a concurrency bug to a peer can absolutely explain system risk to an executive. The difference is framing, vocabulary, and context.
Not intelligence. In the age of AI, where code generation is increasingly commoditized, the ability to translate complexity into clarity is becoming a defining advantage. Engineers aren’t bad communicators. We just have to remember that outside our bubble, translation is part of the job.

—Brian

How Robert Goddard’s Self-Reliance Crashed His Dreams

Robert Goddard launched the first liquid-fueled rocket 100 years ago, but his legacy still has relevant lessons for today’s engineers. Although Goddard’s headstrong confidence in his ideas helped bring about the breakthrough, it later became an obstacle in what systems engineer Guru Madhavan calls “the alpha trap.” Madhavan writes: “We love to celebrate the lone genius, yet we depend on teams to bring the flame of genius to the people.” Read more here.

Redefining the Software Engineering Profession for AI

For Communications of the ACM, two Microsoft engineers propose a model for software engineering in the age of AI: making the growth of early-in-career developers an explicit organizational goal. Without hiring early-career workers, the profession’s talent pipeline will eventually dry up. So, they argue, companies must hire them and develop talent, even if that comes with a short-term dip in productivity. Read more here.

IEEE Launches Global Virtual Career Fairs

Looking for a job? Last year, IEEE Industry Engagement hosted its first virtual career fair to connect recruiters and young professionals. Several more career fairs are now planned, including two upcoming regional events and a global career fair in June. At these fairs, you can participate in interactive sessions, chat with recruiters, and experience video interviews. Read more here.

25.03.2026 19:03:20

Technology and Science
1 day ago

This is a sponsored article brought to you by General Motors. Visit their new Engineering Blog for more insights.

Autonomous driving is one of the most demanding problems in physical AI. An automated system must interpret a chaotic, ever-changing world in real time—navigating uncertainty, predicting human behavior, and operating safely across an immense range of environments and edge cases. At General Motors, we approach this problem from a simple premise: while most moments on the road are predictable, the rare, ambiguous, and unexpected events — the long tail — are what ultimately define whether an autonomous system is safe, reliable, and ready for deployment at scale. (Note: While here we discuss research and emerging technologies to solve the long tail required for full general autonomy, we also discuss our current approach to solving 99% of everyday autonomous driving in a deep dive on Compound AI.) As GM advances toward eyes-off highway driving, and ultimately toward fully autonomous vehicles, solving the long tail becomes the central engineering challenge. It requires developing systems that can be counted on to behave sensibly in the most unexpected conditions. GM is building scalable driving AI to meet that challenge — combining large-scale simulation, reinforcement learning, and foundation-model-based reasoning to train autonomous systems at a scale and speed that would be impossible in the real world alone.

Stress-testing for the long tail

Long-tail scenarios of autonomous driving come in a few varieties. Some are notable for their rareness. There’s a mattress on the road. A fire hydrant bursts. A massive power outage in San Francisco that disabled traffic lights required driverless vehicles to navigate never-before-experienced challenges. These rare system-level interactions, especially in dense urban environments, show how unexpected edge cases can cascade at scale. But long-tail challenges don’t just come in the form of once-in-a-lifetime rarities.
They also manifest as everyday scenarios that require characteristically human courtesy or common sense. How do you queue up for a spot without blocking traffic in a crowded parking lot? Or navigate a construction zone, guided by gesturing workers and ad-hoc signs? These are simple challenges for a human driver but require inventive engineering to handle flawlessly with a machine.

[Figure: Autonomous driving scenario demand curve]

Deploying vision language models

One tool GM is developing to tackle these nuanced scenarios is the use of Vision Language Action (VLA) models. Starting with a standard Vision Language Model, which leverages internet-scale knowledge to make sense of images, GM engineers use specialized decoding heads to fine-tune for distinct driving-related tasks. The resulting VLA can make sense of vehicle trajectories and detect 3D objects on top of its general image-recognition capabilities. These tuned models enable a vehicle to recognize that a police officer’s hand gesture overrides a red traffic light or to identify what a “loading zone” at a busy airport terminal might look like. These models can also generate reasoning traces that help engineers and safety operators understand why a maneuver occurred — an important tool for debugging, validation, and trust.

Testing hazardous scenarios in high-fidelity simulations

The trouble is: driving requires split-second reaction times, so any excess latency poses an especially critical problem.
To solve this, GM is developing a “Dual Frequency VLA.” This large-scale model runs at a lower frequency to make high-level semantic decisions (“Is that object in the road a branch or a cinder block?”), while a smaller, highly efficient model handles the immediate, high-frequency spatial control (steering and braking). This hybrid approach allows the vehicle to benefit from deep semantic reasoning without sacrificing the split-second reaction times required for safe driving. But dealing with an edge case safely requires that the model not only understand what it is looking at but also understand how to sensibly drive through the challenge it’s identified. For that, there is no substitute for experience. Which is why, each day, we run millions of high-fidelity closed-loop simulations, equivalent to tens of thousands of human driving days, compressed into hours of simulation. We can replay actual events, modify real-world data to create new virtual scenarios, or design new ones entirely from scratch. This allows us to regularly test the system against hazardous scenarios that would be nearly impossible to encounter safely in the real world.

Synthetic data for the hardest cases

Where do these simulated scenarios come from? GM engineers employ a whole host of AI technologies to produce novel training data that can model extreme situations while remaining grounded in reality. GM’s “Seed-to-Seed Translation” research, for instance, leverages diffusion models to transform existing real-world data, allowing a researcher to turn a clear-day recording into a rainy or foggy night while perfectly preserving the scene’s geometry. The result? A “domain change”—clear becomes rainy, but everything else remains the same. In addition, our GM World diffusion-based simulator allows us to synthesize entirely new traffic scenarios using natural language and spatial bounding boxes. We can summon entirely new scenarios with different weather patterns.
We can also take an existing road scene and add challenging new elements, such as a vehicle cutting into our path. High-fidelity simulation isn’t always the best tool for every learning task. Photorealistic rendering is essential for training perception systems to recognize objects in varied conditions. But when the goal is teaching decision-making and tactical planning—when to merge, or how to navigate an intersection—the computationally expensive details matter less than spatial relationships and traffic dynamics. AI systems may need billions or even trillions of lightweight examples to support reinforcement learning, where models learn the rules of sensible driving through rapid trial and error rather than relying on imitation alone. To this end, General Motors has developed a proprietary, multi-agent reinforcement learning simulator, GM Gym, to serve as a closed-loop simulation environment that can both simulate high-fidelity sensor data and model thousands of drivers per second in an abstract environment known as “Boxworld.” By focusing on essentials like spatial positioning, velocity, and rules of the road while stripping away details like puddles and potholes, Boxworld creates a high-speed training environment for reinforcement learning models, operating 50,000 times faster than real time and simulating 1,000 km of driving per second of GPU time. It’s a method that allows us to not just imitate humans, but to develop driving models that have verifiable objective outcomes, like safety and progress.

From abstract policy to real-world driving

Of course, the route from your home to your office does not run through Boxworld. It passes through a world of asphalt, shadows, and weather.
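To make the idea of a lightweight abstract environment concrete, here is a toy sketch in the spirit of what's described: a one-dimensional car-following task rewarded for progress and penalized for an unsafe gap. The environment, rewards, and policy here are invented for illustration and bear no relation to GM Gym's actual design:

```python
import random

class ToyBoxWorld:
    """1-D following task: an ego car behind a lead car; keep a safe gap."""
    def __init__(self):
        self.ego, self.lead = 0.0, 20.0

    def step(self, speed):
        self.ego += speed
        self.lead += random.uniform(0.5, 1.5)  # lead car drifts forward
        gap = self.lead - self.ego
        if gap <= 0:
            return -100.0, True  # collision: large penalty, episode ends
        # reward progress, penalize deviation from a 10 m target gap
        return speed - abs(gap - 10) * 0.1, False

def greedy_policy(env):
    """Drive faster when the gap is large, slower when it is small."""
    gap = env.lead - env.ego
    return max(0.0, min(2.0, (gap - 10) * 0.2 + 1.0))

random.seed(0)
env = ToyBoxWorld()
total = 0.0
for _ in range(100):
    reward, done = env.step(greedy_policy(env))
    total += reward
    if done:
        break
print(f"episode return: {total:.1f}")
```

Stripped of rendering, an environment like this steps in microseconds, which is what makes the billions of trial-and-error examples mentioned above tractable.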
So, to bring that conceptual expertise into the real world, GM is one of the first to employ a technique called “On Policy Distillation,” where engineers run their simulator in both modes simultaneously: the abstract, high-speed Boxworld and the high-fidelity sensor mode. Here, the reinforcement learning model—which has practiced countless abstract miles to develop a perfect “policy,” or driving strategy—acts as a teacher. It guides its “student,” the model that will eventually live in the car. This transfer of wisdom is incredibly efficient; just 30 minutes of distillation can capture the equivalent of 12 hours of raw reinforcement learning, allowing the real-world model to rapidly inherit the safety instincts its cousin painstakingly honed in simulation.

Designing failures before they happen

Simulation isn’t just about training the model to drive well, though; it’s also about trying to make it fail. To rigorously stress-test the system, GM utilizes a differentiable pipeline called SHIFT3D. Instead of just recreating the world, SHIFT3D actively modifies it to create “adversarial” objects designed to trick the perception system. The pipeline takes a standard object, like a sedan, and subtly morphs its shape and pose until it becomes a “challenging”, fun-house version that is harder for the AI to detect. Optimizing these failure modes is what allows engineers to preemptively discover safety risks before they ever appear on the road. Iteratively retraining the model on these generated “hard” objects has been shown to reduce near-miss collisions by over 30%, closing the safety gap on edge cases that might otherwise be missed. Even with advanced simulation and adversarial testing, a truly robust system must know its own limits. To enable safety in the face of the unknown, GM researchers add a specialized “epistemic uncertainty head” to their models. This architectural addition allows the AI to distinguish between standard noise and genuine confusion.
When the model encounters a scenario it doesn’t understand—a true “long tail” event—it signals high epistemic uncertainty. This acts as a principled proxy for data mining, automatically flagging the most confusing and high-value examples for engineers to analyze and add to the training set. This rigorous, multi-faceted approach—from “Boxworld” strategy to adversarial stress-testing—is General Motors’ proposed framework for solving the final 1% of autonomy. And while it serves as the foundation for future development, it also surfaces new research challenges that engineers must address. How do we balance the essentially unlimited data from reinforcement learning with the finite but richer data we get from real-world driving? How close can we get to full, human-like driving by writing down a reward function? Can we go beyond domain change to generate completely new scenarios with novel objects?

Solving the long tail at scale

Working toward solving the long tail of autonomy is not about a single model or technique. It requires an ecosystem — one that combines high-fidelity simulation with abstract learning environments, reinforcement learning with imitation, and semantic reasoning with split-second control. This approach does more than improve performance on average cases. It is designed to surface the rare, ambiguous, and difficult scenarios that determine whether autonomy is truly ready to operate without human supervision. There are still open research questions. How human-like can a driving policy become when optimized through reward functions? How do we best combine unlimited simulated experience with the richer priors embedded in real human driving? And how far can generative world models take us in creating meaningful, safety-critical edge cases? Answering these questions is central to the future of autonomous driving.
At GM, we are building the tools, infrastructure, and research culture needed to address them — not at small scale, but at the scale required for real vehicles, real customers, and real roads.
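The "epistemic uncertainty" idea described above has a simple, widely used cousin: ensemble disagreement. If several independently trained predictors disagree on an input, that input is likely outside the training distribution. The sketch below uses toy hand-built "models" to show the mechanism; it is an illustrative stand-in, not GM's actual uncertainty-head architecture:

```python
import random

def make_predictor(noise_seed):
    """Toy 'model': agrees with y = 2x on x in [-1, 1]; each member's
    private bias only kicks in outside that training range."""
    rng = random.Random(noise_seed)
    bias = rng.uniform(-0.5, 0.5)
    def predict(x):
        return 2 * x + bias * max(0.0, abs(x) - 1.0)
    return predict

ensemble = [make_predictor(seed) for seed in range(5)]

def epistemic_uncertainty(x):
    """Spread (max - min) of ensemble predictions as an uncertainty proxy."""
    preds = [p(x) for p in ensemble]
    return max(preds) - min(preds)

in_dist = epistemic_uncertainty(0.5)    # familiar input: members agree
out_dist = epistemic_uncertainty(10.0)  # novel input: members diverge
print(f"in-distribution spread: {in_dist:.3f}")
print(f"out-of-distribution spread: {out_dist:.3f}")
```

High-spread inputs are exactly the candidates one would flag for labeling and retraining, mirroring the data-mining role the article assigns to the uncertainty signal.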

25.03.2026 19:00:05

Technology and Science
1 day ago

When you hear the term humanoid robot, you may think of C-3PO, the human-cyborg-relations android from Star Wars. C-3PO was designed to assist humans in communicating with robots and alien species. The droid, which first appeared on screen in 1977, joined the characters on their adventures, walking, talking, and interacting with the environment like a human. It was ahead of its time. Before the release of Star Wars, a few androids did exist and could move and interact with their environment, but none could do so without losing its balance. It wasn’t until 1996 that the first autonomous robot capable of walking without falling was developed in Japan. Honda’s Prototype 2 (P2) was nearly 183 centimeters tall and weighed 210 kilograms. It could control its posture to maintain balance, and it could move multiple joints simultaneously. In recognition of that decades-old feat, P2 has been honored as an IEEE Milestone. The dedication ceremony is scheduled for 28 April at the Honda Collection Hall, located on the grounds of the Mobility Resort Motegi, in Japan. The machine is on display in the hall’s robotics exhibit, which showcases the evolution of Honda’s humanoid technology. In support of the Milestone nomination, members of the IEEE Nagoya (Japan) Section wrote: “This milestone demonstrated the feasibility of humanlike locomotion in machines, setting a new standard in robotics.” The Milestone proposal is available on the Engineering Technology and History Wiki.

Developing a domestic android

In 1986 Honda researchers Kazuo Hirai, Masato Hirose, Yuji Haikawa, and Toru Takenaka set out to develop what they called a “domestic robot” to collaborate with humans.
It would be able to climb stairs, remove impediments in its path, and tighten a nut with a wrench, according to their research paper on the project. “We believe that a robot working within a household is the type of robot that consumers may find useful,” the authors wrote.

But to create a machine that would do household chores, it had to be able to move around obstacles such as furniture, stairs, and doorways. It needed to autonomously walk and read its environment like a human, according to the researchers. But no robot could do that at the time. The closest technologists got was the WABOT-1. Built in 1973 at Waseda University, in Tokyo, the WABOT had eyes and ears, could speak Japanese, and used tactile sensors embedded on its hands as it gripped and moved objects. Although the WABOT could walk, albeit unsteadily, it couldn’t maneuver around obstacles or maintain its balance. It was powered by an external battery and computer.

To build an android, the Honda team began by analyzing how people move, using themselves as models. That led to specifications for the robot that gave it humanlike dimensions, including the location of the leg joints and how far the legs could rotate. Once they began building the machine, though, the engineers found it difficult to satisfy every specification. Adjustments were made to the number of joints in the robot’s hips, knees, and ankles, according to the research paper. Humans have four hip, two knee, and three ankle joints; P2’s predecessor had three hip, one knee, and two ankle joints. The arms were treated similarly. A human’s four shoulder and three elbow joints became three shoulder joints and one elbow joint in the robot.

The researchers installed existing Honda motors and hydraulics in the hips, knees, and ankles to enable the robot to walk. Each joint was operated by a DC motor with a harmonic-drive reduction gear system, which is compact and offers high torque capacity. To test their ideas, the engineers built what they called E0.
The robot, which was just a pair of connected legs, successfully walked. It took about 15 seconds to take each step, however, and it moved in a straight line using static walking, according to a post about the project on Honda’s website. (Static walking is when the body’s center of mass is always within the foot’s sole. Humans walk with their center of mass below their navel.)

The researchers created several algorithms to enable the robot to walk like a human, according to the Honda website. The algorithms allowed the robot to use a locomotion mechanism called dynamic walking, whereby the robot stays upright by constantly moving and adjusting its balance rather than keeping its center of mass over its feet, according to a video on the YouTube channel Everything About Robotics Explained.

“P2 was not just a technical achievement; it was a catalyst that propelled the field of humanoid robotics forward, demonstrating the potential for robots to interact with and assist humans in meaningful ways.” —IEEE Nagoya Section

The Honda team installed rubber brushes on the bottom of the machine’s feet to reduce vibrations from the landing impacts (the force experienced when its feet touch the ground), which had made the robot lose its balance. Between 1987 and 1991, three more prototypes (E1, E2, and E3) were built, each testing a new algorithm. E3 was a success.

With the dynamic walking mechanism complete, the researchers continued their quest to make the robot stable. The team added 6-axis sensors to detect the force with which the ground pushed back against the robot’s feet and the movements of each foot and ankle, allowing the robot to adjust its gait in real time for stability. The team also developed a posture-stabilizing control system to help the robot stay upright.
A local controller directed how the electric motor actuators needed to move so the robot could follow the leg joint angles when walking, according to the research paper. During the next three years, the team tested the systems and built three more prototypes (E4, E5, and E6), which had boxlike torsos atop the legs.

In 1993 the team was finally ready to build an android with arms and a head that looked more like C-3PO, dubbed Prototype 1 (P1). Because the machine was meant to help people at home, the researchers determined its height and limb proportions based on the typical measurements of doorways and stairs. The arm length was based on the robot’s ability to pick up an object when squatting. When they finished building P1, it was 191.5 cm tall, weighed 175 kg, and used an external power source and computer. It could turn a switch on and off, grab a doorknob, and carry a 70 kg object.

P1 was not launched publicly but instead was used to conduct research on how to further improve the design. The engineers looked at how to install an internal power source and computer, for example, as well as how to coordinate the movement of the arms and legs, according to Honda.

For P2, four video cameras were installed in its head—two for vision processing and the other two for remote operation. The head was 60 cm wide and connected to the torso, which was 75.6 cm deep. A computer with four microSPARC II processors running a real-time operating system was added to the robot’s torso. The processors were used to control the arms, legs, joints, and vision-processing cameras. Also within the body were DC servo amplifiers, a 20-kg nickel-zinc battery, and a wireless Ethernet modem, according to the research paper.
The battery lasted for about 15 minutes; the machine also could be charged by an external power supply. The hardware was enclosed in white-and-gray casing.

P2, which was launched publicly in 1996, could walk freely, climb up and down stairs, push carts, and perform some actions wirelessly. King Rose Archives

The following year, Honda’s engineers released the smaller and lighter P3. It was 160 cm tall and weighed 130 kg. In 2000 the popular ASIMO robot was introduced. Although shorter than its predecessors at 130 cm, it could walk, run, climb stairs, and recognize voices and faces. The most recent version was released in 2011. Honda has retired the robot.

Honda P2’s influence

Thanks to P2, today’s androids are not just ideas in a laboratory. Robots have been deployed to work in factories and, increasingly, at home. The machines are even being used for entertainment. During this year’s Spring Festival gala in Beijing, machines developed by Chinese startups Unitree Robotics, Galbot, Noetix, and MagicLab performed synchronized dances, martial arts, and backflips alongside human performers.

“P2’s development shifted the focus of robotics from industrial applications to human-centric designs,” the Milestone sponsors explained in the wiki entry. “It inspired subsequent advancements in humanoid robots and influenced research in fields like biomechanics and artificial intelligence.

“It was not just a technical achievement; it was a catalyst that propelled the field of humanoid robotics forward, demonstrating the potential for robots to interact with and assist humans in meaningful ways.”

To learn more about robots, check out IEEE Spectrum’s guide.

Recognition as an IEEE Milestone

A plaque recognizing Honda’s P2 robot as an IEEE Milestone is to be installed at the Honda Collection Hall.
The plaque is to read:

In 1996 Prototype 2 (P2), a self-contained autonomous bipedal humanoid robot capable of stable dynamic walking and stair-climbing, was introduced by Honda. Its legged robotics incorporated real-time posture control, dynamic balance, gait generation, and multijoint coordination. Honda’s mechatronics and control algorithms set technical benchmarks in mobility, autonomy, and human-robot interaction. P2 inspired new research in humanoid robot development, leading to increasingly sophisticated successors.

Administered by the IEEE History Center and supported by donors, the Milestone program recognizes outstanding technical developments around the world.

25.03.2026 18:00:05

Technologie a věda
1 den

“Can I get an interview?” “Can I get a job when I graduate?” Those questions came from students during a candid discussion about artificial intelligence, capturing the anxiety many young people feel today. As companies adopt AI-driven interview screeners, restructure their workforces, and redirect billions of dollars toward AI infrastructure, students are increasingly unsure of what the future of work will look like.

We had gathered people together at a coffee shop in Auburn, Alabama, for what we called an AI Café. The event was designed to confront concerns about AI directly, demystifying the technology while pushing back against the growing narrative of technological doom. AI is reshaping society at breathtaking speed. Yet the trajectory of this transformation is being charted primarily by for-profit tech companies, whose priorities revolve around market dominance rather than public welfare. Many people feel that AI is something being done to them rather than developed with them.

As computer science and liberal arts faculty at Auburn University, we believe there is another path forward: one where scholars engage their communities in genuine dialogue about AI. Not to lecture about technical capabilities, but to listen, learn, and co-create a vision for AI that serves the public interest.

The AI Café Model

Last November, we ran two public AI Cafés in Auburn. These were informal, 90-minute conversations among faculty, students, and community members about their experiences with AI. In these conversational forums, participants sat in clusters, questions flowed in multiple directions, and lived experience carried as much weight as technical expertise.

We avoided jargon and resisted attempts to “correct” misconceptions, welcoming whatever emotions emerged. One ground rule proved crucial: keeping discussions in the present, asking participants where they encounter AI today. Without that focus, conversations could easily drift to sci-fi speculation.
Historical analogies—to the printing press, electricity, and smartphones—helped people contextualize their reactions. And we found that without shared definitions of AI, people talked past each other; we learned to ask participants to name specific tools they were concerned about.

Organizers Xaq Frohlich, Cheryl Seals, and Joan Harrell (right) held their first AI Café in a welcoming coffee shop and bookstore. Well Red

Most important, we approached these events not as experts enlightening the masses, but as community members navigating complex change together.

What We Learned by Listening

Participants arrived with significant frustration. They felt that commercial interests were driving AI development “without consideration of public needs,” as one attendee put it. This echoed deeper anxieties about technology, from social media algorithms that amplify division to devices that profit from “engagement” and replace meaningful face-to-face connection. People aren’t simply “afraid of AI.” They’re weary of a pattern where powerful technologies reshape their lives while they have little say.

Yet when given space to voice concerns without dismissal, something shifted. Participants didn’t want to stop AI development; they wanted to have a voice in it. When we asked “What would a human-centered AI future look like?” the conversation became constructive. People articulated priorities: fairness over efficiency, creativity over automation, dignity over convenience, community over individualism.

The three organizers, all professors at Alabama’s Auburn University, say that including people from the liberal arts fields brought new perspectives to the discussions about AI. Well Red

For us as organizers, the experience was transformative. Hearing how AI affected people’s work, their children’s education, and their trust in information prompted us to consider dimensions we hadn’t fully grasped. Perhaps most striking was the gratitude participants expressed for being heard.
It wasn’t about filling knowledge deficits; it was about mutual learning. The trust generated created a spillover effect, renewing faith that AI could serve the public interest if shaped through inclusive processes.

How to Start Your Own AI Café

The “deficit model” of science communication—where experts transmit knowledge to an uninformed public—has been discredited. Public resistance to emerging technologies reflects legitimate concerns about values, risks, and who controls decision-making. Our events point toward a better model. We urge engineering and liberal arts departments, professional societies, and community organizations worldwide to organize dialogues similar to our AI Cafés.

We found that a few simple design choices made these conversations far more productive. Informal and welcoming spaces such as coffee shops, libraries, and community centers helped participants feel comfortable (and serving food and drinks helped too!). Starting with small-group discussions, where people talked with neighbors, produced more honest thinking and greater participation. Partnering with colleagues in the liberal arts brought additional perspectives on technology’s social dimensions. And by making a commitment to an ongoing series of events, we built trust.

Facilitation also matters. Rather than leading with technical expertise, we began with values: We asked what kind of world participants wanted, and how AI might help or hinder that vision. We used analogies to earlier technologies to help people situate their reactions and grounded discussions in present realities, asking participants where they have encountered AI in their daily lives. We welcomed emotions constructively, transforming worry into problem solving by asking questions like: “What would you do about that?”

Why Engineers Should Engage the Public

Professional ethics codes remain abstract unless grounded in dialogue with affected communities.
Conversations about what “responsible AI” means will look different in São Paulo than in Seoul, in Vienna than in Nairobi. What makes the AI Café model portable is its general principles: informal settings, values-first questions, present-tense focus, genuine listening.

Without such engagement, ethical accountability quietly shifts to technical experts rather than remaining a shared public concern. If we let commercial interests define AI’s trajectory with minimal public input, it will only deepen divides and entrench inequities.

AI will continue advancing whether or not we have public trust. But AI shaped through dialogue with communities will look fundamentally different from AI developed solely to pursue what’s technically possible or commercially profitable. The tools for this work aren’t technical; they’re social, requiring humility, patience, and genuine curiosity.

The question isn’t whether AI will transform society. It’s whether that transformation will be done to people or with them. We believe scholars must choose the latter, and that starts with showing up in coffee shops and community centers to have conversations where we do less talking and more listening. The future of AI depends on it.

25.03.2026 14:00:05

Technologie a věda
1 den

U.S. doctoral programs in electrical engineering form the foundation of technological advancement, training the brightest minds in the world to research, develop, and design next-generation electronics, software, electrical infrastructure, and other high-tech products and systems. Elite institutions have long served as launchpads for the engineers behind tomorrow’s technology. Now that foundation is under strain.

With U.S. universities increasingly entangled in political battles under the second Trump administration, uncertainty is beginning to ripple through doctoral admissions for electrical engineering programs. While some departments are reducing the number of spots available in anticipation of potential federal funding cuts, others are seeing their applicant pools shrink, particularly among international students, who make up a significant portion of their programs.

In 2024 alone, U.S. universities awarded more than 2,000 doctorates in electrical and computer engineering, according to data from the National Center for Science and Engineering Statistics. The number of computing Ph.D.s grew significantly in the 2010s, according to data from the National Academies, but there is still high demand for those with advanced degrees across academia, government, and industry. Now, some universities point to warning signs of waning enrollment. Though not all engineers have Ph.D.s, if enrollment continues to shrink, fewer doctoral students could mean fewer engineers developing cutting-edge technology and training the next generation, potentially exacerbating existing labor shortages as global competition for tech talent intensifies.

Federal funding cuts affect admissions

Public universities in particular are feeling the strain because they rely heavily on federal grants to support doctoral students. The University of California, Los Angeles, for instance, must fund Ph.D. students for the duration of a degree—typically five years. In August 2025, the U.S.
government pulled more than US $580 million in federal grants over allegations that the university failed to adequately address antisemitism on campus during student protests. A federal judge has since ordered the funding to be restored, but faculty began to worry that research support could be clawed back without notice, says Subramanian Iyer, distinguished professor at UC Los Angeles’s department of electrical and computer engineering. According to Iyer, departments across UC Los Angeles, including engineering, plan to scale back Ph.D. admissions this year. “The fear is that at some point, all this government money will be taken away,” Iyer says. “Lowering the admissions rate is just a way to prepare for that reality.”

In response to a request for comment, a spokesperson for the U.S. National Science Foundation—a major source of federal research funding at UC Los Angeles and elsewhere—said, “NSF recognizes the essential role doctoral trainees play in the nation’s engineering and STEM enterprise” and noted several of the foundation’s awards and programs that support graduate research.

Funding shocks may also force Pennsylvania State University to reshape future admissions decisions, according to Madhavan Swaminathan, head of Penn State’s electrical engineering department and director of the Center for Heterogeneous Integration of Micro Electronic Systems (CHIMES), a semiconductor research lab. In 2023, the Defense Advanced Research Projects Agency (DARPA) and industry partners awarded CHIMES a five-year, $32.7 million grant. But in late 2025, the agency pulled its final year of funding from the center, citing a shift in priorities from microelectronics to photonics, Swaminathan says. As a result, CHIMES’ annual budget, which supports research assistantships for roughly 100 engineering graduate students, the majority pursuing Ph.D.s, will fall from $7 million in 2026 to $3.5 million in 2027.
If these constraints persist, Penn State’s engineering department may reduce the number of doctoral students it supports. In a statement, a DARPA spokesperson told IEEE Spectrum: “Basic research is central to identifying world-changing technologies, and DARPA remains committed to engaging academic institutions in our program research. By design, a DARPA program typically lasts about 3 to 5 years. Once we establish proof of concept, we transition the technology for further development and turn our attention to other challenging areas of research.”

Penn State’s enrollment numbers reflect Swaminathan’s caution. He says the electrical engineering Ph.D. cohort shrank from 28 students in 2024 to 15 students in 2025. Applications show a similar pattern. After rising from 195 in 2024 to 247 in 2025, Ph.D. applications fell roughly 30 percent to 174 for the upcoming 2026 cohort, a sign that prospective students may be wary of applying to U.S. programs.

Immigration restrictions and application declines

In late January, the Trump administration announced it had paused visa approvals for citizens of 75 countries. Months earlier, the administration proposed new restrictions on student visas, including a four-year cap. For Texas A&M University’s graduate electrical and computer engineering programs, up to 80 percent of applicants each year are international students, according to Narasimha Annapareddy, professor and head of the department. Annapareddy says applications for the fall 2026 Ph.D. cohort have dropped by roughly 50 percent.

Annapareddy says the United States is “sending a message that migration is going to be more difficult in the future.” Foreign students often pursue degrees in the U.S. not only for academic training, he says, but to build long-term careers and lives in the country. Fewer applications from international students mean that the university forgoes a “driven and hungry” segment of the applicant pool who are highly qualified in technical fields.
“The fear is that at some point, all this government money will be taken away.” —Subramanian Iyer, UC Los Angeles

At the University of Southern California, the decline is more moderate. The freshman Ph.D. class fell from about 90 students in 2024 to roughly 70 in 2025, a reduction of 22 percent, according to Richard Leahy, chair of USC’s Ming Hsieh Department of Electrical and Computer Engineering. While Leahy says applications are down modestly overall, domestic applications have increased by roughly 15 percent. Beyond immigration restrictions, international students, particularly from countries such as India and China, may be staying in their home countries as their technology sectors expand. “A lot of those students that would normally have come to the U.S. are now taking very good jobs working in the AI industry and other areas,” Leahy says. “There are a lot more opportunities now.”

Workforce pipeline strains

Some faculty say shrinking cohorts could erode the tech workforce if the pattern continues. At UC Los Angeles, Iyer describes a doctoral ecosystem built on a chain of mentorship. Among the roughly 25 students in his lab, senior doctoral students mentor junior Ph.D. candidates, who in turn guide master’s students and undergraduates. The system depends on overlapping cohorts. Reducing the number of students hired weakens that overlap and the trickle-down benefits of the mentorship model that keeps labs functioning. The real benefit of the university system isn’t just the teaching but also “the community that you build,” Iyer says. “As you decrease admissions, this will disappear.”

At Penn State, Swaminathan points to specialization as key to a strong workforce. Many doctoral students train in semiconductor engineering, feeding expert talent into the domestic chip industry.
If enrollment continues to shrink over the next few years, Swaminathan says, companies may need to hire students with bachelor’s or master’s degrees, who might lack the skills required to design and innovate new chips. “Without that specialization, there’s only so much one can do,” Swaminathan says.

The industry–academia gap

Not all departments are shrinking. At the University of Texas at Austin, overall enrollment has remained relatively steady, according to Diana Marculescu, chair of UT Austin’s Chandra Family Department of Electrical and Computer Engineering. While she says recent fluctuations aren’t raising alarms, her concern lies more with alignment between research and industry. Doctoral students often train according to current grant priorities, she says. But by the time graduates enter the job market four to six years later, their specialization may not align neatly with open roles. That creates friction in the talent pipeline. “That lack of connection might be problematic,” Marculescu says. She argues that closer collaboration between universities and the private sector could help create stronger feedback loops between hiring needs and academic research priorities.

For now, USC’s Leahy says Ph.D. graduates remain in high demand, and the current shifts have not yet translated into measurable workforce shortages. “We should be concerned about the number of Ph.D.s,” he says. “But there isn’t a crisis at this point.”

25.03.2026 13:00:05

Technologie a věda
2 dny

Last week’s Nvidia GTC conference highlighted new chip architectures to power AI. But as the chips become faster and more powerful, the remainder of data center infrastructure is playing catch-up. The power-delivery community is responding: Announcements from Delta, Eaton, and Vertiv showcased new designs for the AI era. Complex and inefficient AC-to-DC power conversions are gradually being replaced by DC configurations, at least in hyperscale data centers.

“While AC distribution remains deeply entrenched, advances in power electronics and the rising demands of AI infrastructure are accelerating interest in DC architectures,” says Chris Thompson, vice president of advanced technology and global microgrids at Vertiv.

AC-to-DC Conversion Challenges

Today, nearly all data centers are designed around AC utility power. The electrical path includes multiple conversions before power reaches the compute load. Power typically enters the data center as medium-voltage AC (1 to 35 kilovolts), is stepped down to low-voltage AC (480 or 415 volts) using a transformer, converted to DC inside an uninterruptible power supply (UPS) for battery storage, converted back to AC, and converted again to low-voltage DC (typically 54 V DC) at the server, supplying the DC power computing chips actually require.

“The double conversion process ensures the output AC is clean, stable, and suitable for data center servers,” says Luiz Fernando Huet de Bacellar, vice president of engineering and technology at Eaton.

That setup worked well enough for the amounts of power required by traditional data centers, whose computational racks draw on the order of 10 kW each. For AI, per-rack power is starting to approach 1 megawatt. At that scale, the energy losses, current levels, and copper requirements of AC-to-DC conversions become increasingly difficult to justify. Every conversion incurs some power loss.
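Because each stage multiplies in its own loss, the end-to-end efficiency of the conventional chain is simply the product of the per-stage efficiencies. A minimal Python sketch of that compounding, using illustrative per-stage numbers of my own (the article does not give figures for the individual stages):

```python
# Back-of-envelope: compounding losses along the conventional AC power path.
# The per-stage efficiencies below are illustrative assumptions, not figures
# from the article: MV transformer, double-conversion UPS (AC->DC->AC),
# and the rack power supply (AC -> 54 V DC).
stages = {
    "MV transformer (MV AC -> 480/415 V AC)": 0.99,
    "UPS rectifier (AC -> DC)":               0.97,
    "UPS inverter (DC -> AC)":                0.97,
    "rack PSU (AC -> 54 V DC)":               0.95,
}

efficiency = 1.0
for name, eta in stages.items():
    efficiency *= eta   # losses compound multiplicatively

print(f"end-to-end efficiency: {efficiency:.1%}")
print(f"heat per megawatt of input: {(1 - efficiency) * 1000:.0f} kW")
```

With these assumed numbers the chain lands around 88 percent, meaning on the order of 100 kW per megawatt is dissipated as heat before any chip does useful work; this is the loss that eliminating intermediate conversion steps attacks.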
On top of that, as the amount of power that needs to be delivered grows, the sheer size of the converters, as well as the connector requirements of copper busbars, becomes untenable. According to an Nvidia blog, a 1-MW rack could require as much as 200 kilograms of copper busbar. For a 1-gigawatt data center, it could amount to 200,000 kg of copper.

Benefits of High-Voltage DC Power

By converting 13.8-kV AC grid power directly to 800 V DC at the data center perimeter, most intermediate conversion steps are eliminated. This reduces the number of fans and power-supply units, and leads to higher system reliability, lower heat dissipation, improved energy efficiency, and a smaller equipment footprint. “Each power conversion between the electric grid or power source and the silicon chips inside the servers causes some energy loss,” says Fernando.

Switching from 415-V AC to 800-V DC electrical distribution enables 85 percent more power to be transmitted through the same conductor size. This happens because higher voltage reduces current demand, lowering resistive losses and making power transfer more efficient. Thinner conductors can handle the same load, cutting copper requirements by 45 percent, improving efficiency by 5 percent, and lowering total cost of ownership by 30 percent for gigawatt-scale facilities.

“In a high-voltage DC architecture, power from the grid is converted from medium-voltage AC to roughly 800-V DC and then distributed throughout the facility on a DC bus,” says Vertiv’s Thompson. “At the rack, compact DC-to-DC converters step that voltage down for GPUs and CPUs.”

A report from technology advisory group Omdia claims that higher-voltage DC data centers have already appeared in China. In the Americas, the Mt. Diablo Initiative (a collaboration among Meta, Microsoft, and the Open Compute Project) is a 400-V DC rack power distribution experiment.

Innovations in DC Power Systems

A handful of vendors are trying to get ahead of the game.
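One plausible reading of the 85 percent figure compares the power each conductor carries at the same current rating: three-phase 415-V AC spreads its power over three conductors (discounted by power factor), while 800-V DC needs only two. The sketch below reproduces the number under assumptions of mine that the article does not state, notably a power factor of about 0.9:

```python
import math

# Back-of-envelope check of the "85 percent more power" claim, comparing
# power carried per conductor at an identical conductor current rating.
# Assumptions (mine, not the article's): three-phase 415-V AC at power
# factor 0.9 over 3 conductors, vs. 800-V DC (+/-400 V) over 2 conductors.
I = 1.0    # arbitrary per-conductor current rating, amps
pf = 0.9   # assumed AC power factor

p_ac_per_conductor = math.sqrt(3) * 415 * I * pf / 3   # ~216 W per amp-rated conductor
p_dc_per_conductor = 800 * I / 2                       # 400 W per amp-rated conductor

gain = p_dc_per_conductor / p_ac_per_conductor - 1
print(f"extra power per conductor: {gain:.0%}")        # ~85%
```

Under these assumptions the DC side carries about 85 percent more power per conductor, which also explains the copper savings: the same load needs proportionally less conductor cross-section.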
Vertiv’s 800-V DC ecosystem that integrates with Nvidia Vera Rubin Ultra Kyber platforms will be commercially available in the second half of 2026. Eaton, too, is well advanced in its 800-V DC systems innovation, courtesy of a medium-voltage solid-state transformer (SST) that will sit at the heart of its DC power distribution system. Meanwhile, Delta has released 800-V DC in-row 660-kW power racks with a total of 480 kW of embedded battery backup units. And SolarEdge is hard at work on a 99 percent efficient SST that will be paired with a native DC UPS and a DC power distribution layer.

But much of the industry is far behind. Patrick Hughes, senior vice president of strategy, technical, and industry affairs for the National Electrical Manufacturers Association, says most innovation is happening at the 400-V DC level, though some companies are preparing for 800-V DC. He believes the industry needs a complete, coordinated ecosystem, including power electronics, protection, connectors, sensing, and service-safe components that scale together rather than in isolation. That, in turn, requires retooling manufacturing capacity for DC-specific equipment, expanding semiconductor and materials supply, and clear, long-term demand commitments that justify major capital investment across the value chain.

“Many are taking a cautious approach, offering limited or adapted solutions while waiting for clearer standards, safety frameworks, and customer commitments,” says Hughes. “Building the supply chain will hinge on stabilizing standards and safety frameworks so suppliers can design, certify, manufacture, and install equipment with confidence.”

24.03.2026 16:00:05

Technologie a věda
2 dny

The undying thirst for smarter (historically, that means larger) AI models and greater adoption of the ones we already have has led to an explosion in data-center construction projects, unparalleled both in number and scale. Chief among them is Meta’s planned 5-gigawatt data center in Louisiana, called Hyperion, announced in June of 2025. Meta CEO Mark Zuckerberg said Hyperion will “cover a significant part of the footprint of Manhattan,” and the first phase—a 2-GW version—will be completed by 2030.

Though the project’s stated 5-GW scale is the largest among its peers, it’s just one of several dozen similar projects now underway. According to Michael Guckes, chief economist at construction-software company ConstructConnect, spending on data centers topped US $27 billion by July of 2025 and, once the full-year figures are tallied, will easily exceed $60 billion. Hyperion alone accounts for about a quarter of that.

For the engineers assigned to bring these projects to life, the mix of challenges involved represents a unique moment. The world’s largest tech companies are opening their wallets to pay for new innovations in compute, cooling, and network technology designed to operate at a scale that would’ve seemed absurd five years ago.

At the same time, the breakneck pace of building comes paired with serious problems. Modern data-center construction frequently requires an influx of temporary workers and sharply increases noise, traffic, pollution, and often local electricity prices. And the environmental toll remains a concern long after facilities are built due to the unprecedented 24/7 energy demands of AI data centers, which, according to one recent study, could emit the equivalent of tens of millions of tonnes of CO2 annually in the United States alone.

Regardless of these issues, large AI companies, and the engineers they hire, are going full steam ahead on giant data-center construction.
So, what does it really take to build an unprecedentedly large data center?

AI Rewrites Building Design

The stereotypical data-center building rests on a reinforced concrete slab foundation. That’s paired with a steel skeleton and poured concrete wall panels. The finished building is called a “shell,” a term that implies the structure itself is a secondary concern. Meta has even used gigantic tents to throw up temporary data centers.

Still, the scale of the largest AI data centers brings unique challenges. “The biggest challenge is often what’s under the surface. Unstable, corrosive, or expansive soils can lead to delays and require serious intervention,” says Robert Haley, vice president at construction consulting firm Jacobs. Amanda Carter, a senior technical lead at Stantec, said a soil’s thermal conductivity is also important, as most electrical infrastructure is placed underground. “If the soil has high thermal resistivity, it’s going to be difficult to dissipate [heat].” Engineers may take hundreds or thousands of soil samples before construction can begin.

GPUs

Modern AI data centers often use rack-scale systems, such as the Nvidia GB200 NVL72, which occupy a single data-center rack. Each rack contains 72 GPUs, 36 CPUs, and up to 13.4 terabytes of GPU memory. The racks measure over 2.2 meters tall and weigh over one and a half tonnes, forcing AI data centers to use thicker concrete with more reinforcement to bear the load. A single GB200 rack can use up to 120 kilowatts. If Hyperion meets its 5-gigawatt goals, the data-center campus could include over 41,000 rack-scale systems, for a total of more than 3 million GPUs.
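The 41,000-rack, 3-million-GPU ceiling is simple division of the campus power target by per-rack draw. A sketch of that arithmetic, noting that it ignores cooling and facility overhead, so it is an upper bound rather than a plan:

```python
# Back-of-envelope ceiling on rack and GPU counts from the figures above.
# Ignores cooling/networking overhead (PUE), so real counts would be lower.

campus_power_w = 5e9        # Hyperion's stated 5-GW target
rack_power_w = 120e3        # up to 120 kW per GB200 NVL72 rack
gpus_per_rack = 72

max_racks = campus_power_w / rack_power_w
max_gpus = max_racks * gpus_per_rack

print(f"{max_racks:,.0f} racks -> {max_gpus:,.0f} GPUs")
```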
The final number of GPUs used by Hyperion is likely to be less than that, though only because future GPUs will be larger, more capable, and use more power.

Money

According to ConstructConnect, spending on data centers neared US $27 billion through July of 2025 and, according to the latest data, will tally close to $60 billion through the end of the year. Meta’s Hyperion project is a big slice of the pie, at $10 billion. Data-center spending has become an important prop for the construction industry, which is seeing reduced demand in other areas, such as residential construction and public infrastructure. ConstructConnect’s third-quarter 2025 financial report stated that the quarter’s decline “would have been far more severe without an $11 billion surge in data center starts.”

There’s apparently no shortage of eligible sites: both the number of data centers under construction and the money spent on them have skyrocketed. The spending has allowed companies building data centers to throw out the rule book. Prior to the AI boom, most data centers relied on tried-and-true designs that prioritized inexpensive and efficient construction. Big tech’s willingness to spend has shifted the focus to speed and scale.

The loose purse strings open the door to larger and more robust prefabricated concrete wall and floor panels. Doug Bevier, director of development at Clark Pacific, says some concrete floor panels may now span up to 23 meters and need to handle floor loads up to 3,000 kilograms per square meter, which is more than twice the load international building codes normally define for manufacturing and industry.
In some cases, the concrete panels must be custom-made for a project, an expensive step that the economics of pre-AI data centers rarely justified.

Simultaneously, the time scale for projects is also compressed: Jamie McGrath, senior vice president of data-center operations at Crusoe, says the company is delivering projects in “about 12 months,” compared to 30 to 36 months before. Not all projects are proceeding at that pace, but speed is universally a priority.

That makes it difficult to coordinate the labor and materials required. Meta’s Hyperion site, located in rural Richland Parish, Louisiana, is emblematic of this challenge. As reported by NOLA.com, at least 5,000 temporary workers have flocked to the area, which has only about 20,000 permanent residents. These workers earn above-average wages and bring a short-term boost for some local businesses, such as restaurants and convenience stores. However, they have also spurred complaints from residents about traffic and construction noise and pollution.

This friction with residents includes not only these obvious impacts, but also things you might not immediately suspect, such as light pollution caused by around-the-clock schedules. Also significant are changes to local water tables and runoff, which can reduce water quality for neighbors who rely on well water. These issues have motivated a few U.S. cities to enact data-center bans.

Data Centers Often Go BYOP (Bring Your Own Power)

Meta’s Richland Parish site also highlights a problem that’s priority No. 1 for both AI data centers and their critics: power. Data centers have always drawn large amounts of power, which nudged data-center construction to cluster in hubs where local utilities were responsive to their demands.
Virginia’s electric utility, Dominion Energy, met demand with agreements to build new infrastructure, often with a focus on renewable energy.

The power demands of the largest AI data centers, though, have caught even the most responsive utilities off guard. A report from the Lawrence Berkeley National Laboratory, in California, estimated the entire U.S. data-center industry consumed an average load of roughly 8 GW of power in 2014. Today, the largest AI data-center campuses are built to handle up to a gigawatt each, and Meta’s Hyperion is projected to require 5 GW.

“Data centers are exacerbating issues for a lot of utilities,” says Abbe Ramanan, project director at the Clean Energy Group, a Vermont-based nonprofit. Ramanan explains that utilities often use “peaker plants” to cope with extra demand. They’re usually older, less efficient fossil-fuel plants which, because of their high cost to operate and carbon output, were due for retirement. But Ramanan says increased electricity demand has kept them in service.

Meta secured power for Hyperion by negotiating with Entergy, Louisiana’s electric utility, for construction of three new gas-turbine power plants. Two will be located near the Richland Parish site, while a third will be located in southeast Louisiana. Entergy frames the new plants as a win for the state. “A core pillar of Entergy and Meta’s agreement is that Meta pays for the full cost of the utility infrastructure,” says Daniel Kline, director of power-delivery planning and policy at Entergy. The utility expects that “customer bills will be lower than they otherwise would have been.” That would prove an exception, as a recent report from Bloomberg found electricity rates in regions with data centers are more likely to increase than in regions without.

CO2

Research published in Nature in 2025 projects that data-center emissions will range from 24 million to 44 million CO2-equivalent metric tonnes annually through 2030 in the United States alone.
While some materials used in data centers, such as concrete, lead to significant emissions, the majority of these emissions will result from the high energy demands of AI servers. Estimating the carbon emissions of Hyperion is difficult, as the project won’t be completed until 2030. Assuming that the three new natural gas plants that are planned for construction as part of the project produce emissions typical for their type, however, the plants could lead to full life-cycle emissions of between 4 million and 10 million metric tons of CO2 annually—roughly equivalent to the annual emissions of a country like Latvia.

Concrete

Data centers are typically built from concrete, with steel used as a skeleton to reinforce and shape the concrete shell. While the foundation is often poured concrete, the walls and floors are most often built from prefabricated concrete panels that can span up to 23 meters. Floors use a reinforced T-shape, similar to a steel girder, measuring up to 1.2 meters across at its thickest point. The largest data centers include hundreds of these concrete panels. The American Cement Association projects that the current surge in building will require 1 million tonnes of cement over the next three years, though that’s still a tiny fraction of the overall cement industry, which weighed in at roughly 103 million tonnes in 2024.

The plants, which will generate a combined 2.26 GW, will use combined-cycle gas turbines that recapture waste heat from exhaust. This boosts thermal efficiency to 60 percent and beyond, meaning more fuel is converted to useful energy. Simple-cycle turbines, by contrast, vent the exhaust, which lowers efficiency to around 40 percent. Even so, total life-cycle emissions for the Hyperion plants could range from 4 million to over 10 million tonnes of CO2 each year, depending on how frequently the plants are put in use and the final efficiency benchmarks once built.
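The 4-to-10-million-tonne range above can be sanity-checked with a standard capacity-factor calculation: annual emissions = capacity × hours per year × capacity factor × emissions intensity. The capacity factors and kg-CO2-per-kWh figures below are illustrative assumptions in the typical range for combined-cycle gas plants, not numbers from the article.

```python
# Rough check of the 4-10 Mt CO2/yr range for the three Entergy plants.
# Capacity factors and emissions intensities are illustrative assumptions.

capacity_gw = 2.26          # combined capacity of the three plants
hours_per_year = 8760

def annual_mt_co2(capacity_factor: float, kg_co2_per_kwh: float) -> float:
    kwh = capacity_gw * 1e6 * hours_per_year * capacity_factor  # GW -> kW, then kWh
    return kwh * kg_co2_per_kwh / 1e9                           # kg -> million tonnes

low = annual_mt_co2(capacity_factor=0.5, kg_co2_per_kwh=0.40)   # lightly run, efficient
high = annual_mt_co2(capacity_factor=0.9, kg_co2_per_kwh=0.55)  # heavily run, full life cycle

print(f"{low:.1f} to {high:.1f} million tonnes CO2 per year")
```

Plugging in plausible bounds lands close to the article’s 4-to-10-million-tonne range.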
On the high end, that’s as much CO2 as produced by over 2 million passenger cars. Fortunately, not all of Meta’s data centers take the same approach to power. The company has announced a plan to power Prometheus, a large data-center project in Ohio scheduled to come online before the end of 2026, with nuclear energy.

But other big tech companies, spurred by the need to build data centers quickly, are taking a less efficient approach. xAI’s Colossus 2, located in Memphis, is the most extreme example. The company trucked dozens of temporary gas-turbine generators to power the site, located in a suburban neighborhood. OpenAI, meanwhile, has gas turbines capable of generating up to 300 megawatts at its new Stargate data center in Abilene, Texas, slated to open later in 2026. Both use simple-cycle turbines with a much lower efficiency rating than the combined-cycle plants Entergy will build to power Hyperion. Demand for gas turbines is so intense, in fact, that wait times for new turbines are up to seven years. Some data centers are turning toward refurbished jet engines to obtain the turbines they need.

AI Racks Tip the Scales

The demand for new, reliable power is driven by the power-hungry GPUs inside modern AI data centers. In January of 2025, Mark Zuckerberg announced in a post on Facebook that Meta planned to end 2025 with at least 1.3 million GPUs in service. OpenAI’s Stargate data center plans to use over 450,000 Nvidia GB200 GPUs, and xAI’s Colossus 2, an expansion of Colossus, is built to accommodate over 550,000 GPUs. GPUs, which remain by far the most popular processors for AI workloads, are bundled into human-scale monoliths of steel and silicon which, much like the data centers built to house them, are rapidly growing in weight, complexity, and power consumption.

Memory

In addition to raw compute performance, Nvidia GB200 NVL72 racks also require huge amounts of memory.
An Nvidia GB200 NVL72 rack may include up to 13.4 terabytes of high-bandwidth memory, which implies a data-center campus at Hyperion’s scale will require at least several dozen petabytes. The immense demand has sent memory prices soaring: The price of DRAM, specifically DDR5, has increased 172 percent in 2025.

Power

Hyperion is expected to use 5 gigawatts of power across 11 buildings, which works out to just under 500 megawatts per building, assuming each will be similar to its siblings. That’s enough to power roughly 4.2 million U.S. homes. Just one Hyperion data center built at the Richland Parish site will consume twice as much power as xAI’s Colossus which, at the time of its completion in the summer of 2024, was among the largest data centers yet built.

Nvidia’s GB200 NVL72—a rack-scale system—is currently a leading choice for AI data centers. A single GB200 rack contains 72 GPUs, 36 CPUs, and up to 17 terabytes of memory. It measures 2.2 meters tall, tips the scales at up to 1,553 kilograms, and consumes about 120 kilowatts—as much as around 100 U.S. homes. And this, according to Nvidia, is just the beginning. The company anticipates future racks could consume up to a megawatt each.

Viktor Petik, senior vice president of infrastructure solutions at Vertiv, says the rapid change in rack-scale AI systems has forced data centers to adapt. “AI racks consume far more power and weigh more than their predecessors,” says Petik. He adds that data centers must supply racks with multiple power feeds, without taking up extra space. The new power demands from rack-scale systems have consequences that are reflected in the design of the data center—even its footprint.

In 2022 Meta broke ground on a new data center at a campus in Temple, Texas.
According to SemiAnalysis, which studies AI data centers, construction began with the intent to build the data center in an H-shaped configuration common to other Meta data centers.

LAND

Meta CEO Mark Zuckerberg kicked off the buzz around Hyperion by saying it would cover a large chunk of Manhattan. Many took that to mean Hyperion would be a single building of that size, which isn’t correct. Hyperion will actually be a cluster of data centers—11 are currently planned—with over 370,000 square meters of floor space. That’s a lot smaller even than New York City’s Central Park, which covers 6 percent of Manhattan. Meta has room to grow, however. The Richland Parish site spans 14.7 million square meters in total, which is about a quarter the area of Manhattan. And the 370,000 square meters of floor space Hyperion is expected to provide doesn’t include external infrastructure, such as the three new combined-cycle gas power plants Louisiana utility Entergy is building to power the project.

Construction was paused midway in December of 2022, however, as part of a company-wide review of its data-center infrastructure. Meta decided to knock down the structure it had built and start from scratch. The reasons for this decision were never made public, but analysts believe it was due to the old design’s inability to deliver sufficient electricity to new, power-hungry AI racks. Construction resumed in 2023. Meta’s replacement ditches the H-shaped building for simple, long, rectangular structures, each flanked by rows of gas-turbine generators.
While Meta’s plans are subject to change, Hyperion is currently expected to comprise 11 rectangular data centers, each packed with hundreds of thousands of GPUs, spread across the 13.6-square-kilometer Richland Parish campus.

Cooling, and Connecting, at Scale

Nvidia’s ultradense AI GPU racks are changing data centers not only with their weight and power draw, but also with their intense cooling and bandwidth requirements. Data centers traditionally use air cooling, but that approach has reached its limits. “Air as a cooling medium is inherently inferior,” says Poh Seng Lee, head of CoolestLAB, a cooling research group at the National University of Singapore.

Instead, going forward, GPUs will rely on liquid cooling. However, that adds a new layer of complexity. “It’s all the way to the facilities level,” says Lee. “You need pumps, which we call a coolant distribution unit. The CDU will be connected to racks using an elaborate piping network. And it needs to be designed for redundancy.” On the rack, pipes connect to cold plates mounted atop every GPU; outside the data-center shell, pipes route through evaporation cooling units. Lee says retrofitting an air-cooled data center is possible but expensive.

The networking used by AI data centers is also changing to cope with new requirements. Traditional data centers were positioned near network hubs for easy access to the global internet. AI data centers, though, are more concerned with networks of GPUs. These connections must sustain high bandwidth with impeccable reliability. Mark Bieberich, a vice president at network infrastructure company Ciena, says its latest fiber-optic transceiver technology, WaveLogic 6, can provide up to 1.6 terabits per second of bandwidth per wavelength.
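Aggregate capacity scales multiplicatively: per-wavelength rate × wavelengths per fiber × number of fiber pairs. The 48-wavelength figure is cited in this section; the 200-pair count below is an illustrative stand-in for the article’s “hundreds of fiber pairs.”

```python
# Aggregate fiber capacity, multiplying out the figures in this section.
# fiber_pairs = 200 is an assumed round number for "hundreds of fiber pairs".

tbps_per_wavelength = 1.6       # WaveLogic 6, per wavelength
wavelengths_per_fiber = 48
fiber_pairs = 200               # illustrative assumption

per_fiber_tbps = tbps_per_wavelength * wavelengths_per_fiber
total_tbps = per_fiber_tbps * fiber_pairs

print(f"{per_fiber_tbps:.1f} Tb/s per fiber, {total_tbps:,.0f} Tb/s across {fiber_pairs} pairs")
```

Even with conservative pair counts, the aggregate lands in the thousands of terabits per second, consistent with the figures Ciena cites.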
A single fiber can support 48 wavelengths in total, and Ciena’s largest customers have hundreds of fiber pairs, placing total bandwidth in the thousands of terabits per second.

This is a point where the scale of Meta’s Hyperion, and other large AI data centers, can be deceptive. It seems to imply the physical size of a single data center is what matters. But rather than being a single building, Hyperion is actually a set of buildings connected by high-speed fiber optics. “Interconnecting data centers is absolutely essential,” says Bieberich. “You could think about it as one logical AI training facility, but with geographically distributed facilities.” Nvidia has taken to calling this “scale across,” to contrast it with the idea that data centers must “scale up” to larger singular buildings.

The Big but Hazy Future

The full scale of the challenges that face Hyperion, and other future AI data centers of similar scale, remains hazy. Nvidia has yet to introduce the rack-scale AI GPU systems these facilities will eventually host. How much power will they demand? What type of cooling will they require? How much bandwidth must be provided? These can only be estimated.

In the absence of details, the gravity of AI data-center design is pulled toward one certainty: It must be big. New data-center designers are rewriting their rule book to handle power, cooling, and network infrastructure at a scale that would’ve seemed ridiculous five years ago. This innovation is fueled by big tech’s fat wallet, which shelled out tens of billions of dollars in 2025 alone, leading to questions about whether the spending is sustainable. For the engineers in the trenches of data-center design, though, it’s viewed as an opportunity to make the impossible possible. “I tell my engineers, this is peak. We’re being engineers. We’re being asked complicated questions,” says Stantec’s Carter. “We haven’t got to do that in a long time.”

This article appears in the April 2026 print issue.

24.03.2026 15:00:05

Technology and Science
2 days ago

WHEN KYIV-BORN ENGINEER Yaroslav Azhnyuk thinks about the future, his mind conjures up dystopian images. He talks about “swarms of autonomous drones carrying other autonomous drones to protect them against autonomous drones, which are trying to intercept them, controlled by AI agents overseen by a human general somewhere.” He also imagines flotillas of autonomous submarines, each carrying hundreds of drones, suddenly emerging off the coast of California or Great Britain and discharging their cargoes en masse to the sky.

“How do you protect from that?” he asks as we speak in late December 2025; me at my quiet home office in London, he in Kyiv, which is bracing for another wave of missile attacks.

Azhnyuk is not an alarmist. He cofounded and was formerly CEO of Petcube, a California-based company that uses smart cameras and an app to let pet owners keep an eye on their beloved creatures left alone at home. A self-described “liberal guy who didn’t even receive military training,” Azhnyuk changed his mind about developing military tech in the months following the Russian invasion of Ukraine in February 2022. By 2023, he had relinquished his CEO role at Petcube to do what many Ukrainian technologists have done—to help defend his country against a mightier aggressor.

It took a while for him to figure out what, exactly, he should be doing. He didn’t join the military, but through friends on the front line, he witnessed how, out of desperation, Ukrainian troops turned to off-the-shelf consumer drones to make up for their country’s lack of artillery. Ukrainian troops first began using drones for battlefield surveillance, but within a few months they figured out how to strap explosives onto them and turn them into effective, low-cost killing machines. Little did they know they were fomenting a revolution in warfare.

The Ukrainian robotics company The Fourth Law produces an autonomy module [above] that uses optics and AI to guide a drone to its target.
Yaroslav Azhnyuk [top, in light shirt], founder and CEO of The Fourth Law, describes a developmental drone with autonomous capabilities to Ukrainian President Volodymyr Zelenskyy and German Chancellor Olaf Scholz. Top: THE PRESIDENTIAL OFFICE OF UKRAINE; Bottom: THE FOURTH LAW

That revolution was on display last month, as the U.S. and Israel went to war with Iran. It soon became clear that attack drones are being extensively used by both sides. Iran, for example, is relying heavily on the Shahed drones that the country invented and that are now also being manufactured in Russia and launched by the thousands every month against Ukraine.

A thorough analysis of the Middle East conflict will take some time to emerge. And so to understand the direction of this new way of war, look to Ukraine, where its next phase—autonomy—is already starting to come into view. Outnumbered by the Russians and facing increasingly sophisticated jamming and spoofing aimed at causing the drones to veer off course or fall out of the sky, Ukrainian technologists realized as early as 2023 that what could really win the war was autonomy. Autonomous operation means a drone isn’t being flown by a remote pilot, and therefore there’s no communications link to that pilot that can be severed or spoofed, rendering the drone useless.

By late 2023, Azhnyuk set out to help make that vision a reality.
He founded two companies, The Fourth Law and Odd Systems, the first to develop AI algorithms to help drones overcome jamming during final approach, the second to build thermal cameras to help those drones better sense their surroundings. “I moved from making devices that throw treats to dogs to making devices that throw explosives on Russian occupants,” Azhnyuk quips.

Since then, The Fourth Law has dispatched “more than thousands” of autonomy modules to troops in eastern Ukraine (it declines to give a more specific figure), which can be retrofitted on existing drones to take over navigation during the final approach to the target. Azhnyuk says the autonomy modules, worth around US $50, increase the drone-strike success rate by up to four times that of purely operator-controlled drones.

And that is just the beginning. Azhnyuk is one of thousands of developers, including some who relocated from Western countries, who are applying their skills and other resources to advancing the drone technology that is the defining characteristic of the war in Ukraine. This eclectic group of startups and founders includes Eric Schmidt, the former Google CEO, whose company Swift Beat is churning out autonomous drones and modules for Ukrainian forces. The frenetic pace of tech development is helping a scrappy, innovative underdog hold at bay a much larger and better-equipped foe.

All of this development is careening toward AI-based systems that enable drones to navigate by recognizing features in the terrain, lock on to and chase targets without an operator’s guidance, and eventually exchange information with each other through mesh networks, forming self-organizing robotic kamikaze swarms. Such an attack swarm would be commanded by a single operator from a safe distance. According to some reports, autonomous swarming technology is also being developed for sea drones.
Ukraine has had some notable successes with sea drones, which have reportedly destroyed or damaged around a dozen Russian vessels.

The Skynode X system, from Auterion, provides a degree of autonomy to a drone. AUTERION

For Ukraine, swarming can solve a major problem that puts the nation at a disadvantage against Russia—the lack of personnel. Autonomy is “the single most impactful defense technology of this century,” says Azhnyuk. “The moment this happens, you shift from a manpower challenge to a production challenge, which is much more manageable,” he adds.

The autonomous warfare future envisioned by Azhnyuk and others is not yet a reality. But Marc Lange, a German defense analyst and business strategist, believes that “an inflection point” is already in view. Beyond it, “things will be so dramatically different,” he says. “Ukraine pretty rapidly realized that if the operator-to-drone ratio can be shifted from one-to-one to one-to-many, that creates great economies of scale and an amazing cost exchange ratio,” Lange adds. “The moment one operator can launch 100, 50, or even just 20 drones at once, this completely changes the economics of the war.”

Drones With a View

For a while, jammers that sever the radio links between drones and operators or that spoof GPS receivers were able to provide fairly reliable defense against human-controlled first-person-view attack drones (FPVs). But as autonomous navigation progressed, those electronic shields have gradually become less effective. Defenders must now contend with unjammable drones—ones that are attached to hair-thin optical fibers or that are capable of finding their way to their targets without external guidance. In this emerging struggle, the defenders’ track records aren’t very encouraging: The typical countermeasure is to try to shoot down the attacking drone with a service weapon. It’s rarely successful.
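The cost-exchange argument above can be made concrete with cost-per-successful-strike arithmetic, using the $50 module price and up-to-fourfold success boost reported earlier. The drone unit cost and baseline hit rate below are assumptions for the sketch, not figures from the article, and reported FPV success rates vary widely.

```python
# Illustrative cost-per-hit arithmetic for a $50 autonomy module.
# drone_cost_usd and baseline_hit_rate are assumptions, not reported figures.

drone_cost_usd = 500.0       # assumed cost of a typical FPV attack drone
module_cost_usd = 50.0       # The Fourth Law's stated module price
baseline_hit_rate = 0.10     # assumed operator-flown success rate
boost = 4.0                  # "up to four times" the success rate

cost_per_hit_manual = drone_cost_usd / baseline_hit_rate
cost_per_hit_auto = (drone_cost_usd + module_cost_usd) / (baseline_hit_rate * boost)

print(f"manual: ${cost_per_hit_manual:,.0f} per successful strike")
print(f"with module: ${cost_per_hit_auto:,.0f} per successful strike")
```

Under these assumptions a cheap add-on more than triples the cost-effectiveness of each launch, which is the "amazing cost exchange ratio" Lange describes.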
A truck outfitted with signal-jamming gear drives under antidrone nets near Oleksandriya, in eastern Ukraine, on 2 October 2025. ED JONES/AFP/GETTY IMAGES

“The attackers gain an immense advantage from unmanned systems,” says Lange. “You can have a drone pop up from anywhere and it can wreak havoc. But from autonomy, they gain even more.” The self-navigating drones rely on image-recognition algorithms that have been around for over a decade, says Lange. And the mass deployments of drones on Ukrainian battlefields are enabling both Russian and Ukrainian technologists to create huge datasets that improve the training and precision of those AI algorithms.

A Ukrainian land robot, the Ravlyk, can be outfitted with a machine gun.

While uncrewed aerial vehicles (UAVs) have received the most attention, the Ukrainian military is also deploying dozens of different kinds of drones on land and sea. Ukraine, struggling with the shortage of infantry personnel, began working on replacing a portion of human soldiers with wheeled ground robots in 2024. As of early 2026, thousands of ground robots are crawling across the gray zone along the front line in eastern Ukraine. Most are used to deliver supplies to the front line or to help evacuate the wounded, but some “killer” ground robots fitted with turrets and remotely controlled machine guns have also been tested.

In mid-February, Ukrainian authorities released a video of a Ukrainian ground robot using its thermal camera to detect a Russian soldier in the dark of the night and then kill the invader with a round from a heavy machine gun. So far these robots are mostly controlled by a human operator, but the makers of these uncrewed ground vehicles say their systems are capable of basic autonomous operations, such as returning to base when radio connection is lost.
The goal is to enable them to swarm so that one operator controls not one, but a whole herd of mesh-connected killer robots. But Bryan Clark, senior fellow and director of the Center for Defense Concepts and Technology at the Hudson Institute, questions how quickly ground robots’ abilities can progress. “Ground environments are very difficult to navigate in because of the terrain you have to address,” he says. “The line of sight for the sensors on the ground vehicles is really constrained because of terrain, whereas an air vehicle can see everything around it.”

To achieve autonomy, maritime drones, too, will require navigational approaches beyond AI-based image recognition, possibly based on star positions or electronic signals from radios and cell towers that are within reach, says Clark. Such technologies are still being developed or are in a relatively early operational stage.

How the Shaheds Got Better

Russia is not lagging behind. In fact, some analysts believe its autonomous systems may be slightly ahead of Ukraine’s. For a good example of the Russian military’s rapid evolution, they say, consider the long-range Iranian-designed Shahed drones. Since 2022, Russia has been using them to attack Ukrainian cities and other targets hundreds of kilometers from the front line. “At the beginning, Shaheds just had a frame, a motor, and an inertial navigation system,” Oleksii Solntsev, CEO of Ukrainian defense tech startup MaXon Systems, tells me. “They used to be imprecise and pretty stupid. But they are becoming more and more autonomous.” Solntsev founded MaXon Systems in late 2024 to help protect Ukrainian civilians from the growing threat of Shahed raids.

A Russian Geran-2 drone, based on the Iranian Shahed-136, flies over Kyiv during an attack on 27 December 2025. SERGEI SUPINSKY/AFP/GETTY IMAGES

First produced in Iran in the 2010s, Shaheds can carry 90-kilogram warheads up to 650 km (50-kg warheads can go twice as far).
They cost around $35,000 per unit, compared to a couple of million dollars, at least, for a ballistic missile. The low cost allows Russia to manufacture Shaheds in high quantities, unleashing entire fleets onto Ukrainian cities and infrastructure almost every night.

The early Shaheds were able to reach a preprogrammed location based on satellite-navigation coordinates. Even these early models could frequently overcome the jamming of satellite-navigation signals with the help of an onboard inertial navigation unit. This was essentially a dead-reckoning system of accelerometers and gyroscopes that estimate the drone’s position from continual measurements of its motions.

In the Donetsk region, on 15 August 2025, a Ukrainian soldier hunts for Shaheds and other drones with a thermal-imaging system attached to a ZU-23 23-millimeter antiaircraft gun. KOSTYANTYN LIBEROV/LIBKOS/GETTY IMAGES

Ukrainian defense forces learned to down Shaheds with heavy machine guns, but as Russia continued to innovate, the daily onslaughts started to become increasingly effective. Today’s Shaheds fly faster and higher, and therefore are more difficult to detect and take down. Between January 2024 and August 2025, the number of Shaheds and Shahed-type attack drones launched by Russia into Ukraine per month increased more than tenfold, from 334 to more than 4,000. In 2025, Ukraine found AI-enabling Nvidia chipsets in the wreckage of downed Shaheds, as well as thermal-vision modules capable of locking onto targets at night.

“Now, they are interconnected, which allows them to exchange information with each other,” Solntsev says. “They also have cameras that allow them to autonomously navigate to objects.
Soon they will be able to tell each other to avoid a jammed region or an area where one of them got intercepted.”

These Russian-manufactured Shaheds, which Russian forces call Geran-2s, are thought to be more capable than the garden-variety Shahed-136s that Iran has lately been launching against targets throughout the Middle East. Even the relatively primitive Shahed-136s have done considerable damage, according to press accounts. Those Shahed successes may accrue, at least in part, from the fact that the United States and Israel lack Ukraine’s long experience with fending them off. In just two days in early March, upward of a thousand drones, mostly Shaheds, were launched against U.S. and Israeli targets, with hundreds of them reportedly finding their marks.

One attack, caught on video, shows a Shahed destroying a radar dome at the U.S. navy base in Manama, Bahrain. U.S. forces were understood to be attempting to fend off the drones by striking launch platforms, dispatching fighter aircraft to shoot them down, and by using some extremely costly air-defense interceptors, including ones meant to down ballistic missiles. On 4 March, CNN reported that in a congressional briefing the day before, top U.S. defense officials, including Secretary of Defense Pete Hegseth, acknowledged that U.S. air defenses weren’t keeping up with the onslaught of Shahed drones.

Russian V2U attack drones are outfitted with Nvidia processors and run computer-vision software and AI algorithms to enable the drones to navigate autonomously. GUR OF THE MINISTRY OF DEFENSE OF UKRAINE

Russia is also starting to field a newer generation of attack drones. One of these, the V2U, has been used to strike targets in the Sumy region of northeastern Ukraine. The V2U drones are outfitted with Nvidia Jetson Orin processors and run computer-vision software and AI algorithms that allow the drones to navigate even where satellite navigation is jammed. The sale of Nvidia chips to Russia is banned under U.S.
sanctions against the country. However, press reports suggest that the chips are getting to Russia via intermediaries in India.

Antidrone Systems Step Up

MaXon Systems is one of several companies working to fend off the nightly drone onslaught. Within one year, the company developed and battle-tested a Shahed interception system that hints at the sci-fi future envisioned by Azhnyuk. For a system to be capable of reliably defending against autonomous weaponry, it, too, needs to be autonomous. MaXon’s solution consists of ground turrets scanning the sky with infrared sensors, with additional input from a network of radars that detects approaching Shahed drones at distances of, typically, 12 to 16 km. The turrets fire autonomous fixed-wing interceptor drones, fitted with explosive warheads, toward the approaching Shaheds at speeds of nearly 300 km/h. To boost the chances of successful interception, MaXon is also fielding an airborne anti-Shahed fortification system consisting of helium-filled aerostats hovering above the city that dispatch the interceptors from a higher altitude.

“We are trying to increase the level of automation of the system compared to existing solutions,” says Solntsev. “We need automatic detection, automatic takeoff, and automatic mid-track guidance so that we can guide the interceptor before it can itself lock onto the target.”

An interceptor drone, part of the U.S. MEROPS defensive system, is tested in Poland on 18 November 2025. WOJTEK RADWANSKI/AFP/GETTY IMAGES

In November 2025, the Ukrainian military announced it had been conducting successful trials of the Merops Shahed drone interceptor system developed by the U.S. startup Project Eagle, another of former Google CEO Eric Schmidt’s Ukraine defense ventures. 
Like the MaXon gear, the system can operate largely autonomously and has so far downed over 1,000 Shaheds.

What Works in the Lab Doesn’t Necessarily Fly on the Battlefield

Despite the progress on both sides, analysts say that the kind of robotic warfare imagined by Azhnyuk won’t be a reality for years. “The software for drone collaboration is there,” says Kate Bondar, a former policy advisor for the Ukrainian government and currently a research fellow at the U.S. Center for Strategic and International Studies. “Drones can fly in labs, but in real life, [the forces] are afraid to deploy them because the risk of a mistake is too high,” she adds.

Ukrainian soldiers watch a GOR reconnaissance drone take to the sky near Pokrovsk in the Donetsk region, on 10 March 2025. ANDRIY DUBCHAK/FRONTLINER/GETTY IMAGES

In Bondar’s view, powerful AI-equipped drones won’t be deployed in large numbers given the current prices for high-end processors and other advanced components. And, she adds, the more autonomous the system needs to be, the more expensive the processors and sensors it must have. “For these cheap attack drones that fly only once, you don’t install a high-resolution camera that [has] the resolution for AI to see properly,” she says. “[You install] the cheapest camera. You don’t want expensive chips that can run AI algorithms either. Until we can achieve this balance of technological sophistication, when a system can conduct a mission but at the lowest price possible, it won’t be deployed en masse.”

While existing AI systems are doing a good job recognizing and following large objects like Shaheds or tanks, experts question their ability to reliably distinguish and pursue smaller and more nimble or inconspicuous targets. “When we’re getting into more specific questions, like can it distinguish a Russian soldier from a Ukrainian soldier, or at least a soldier from a civilian? The answer is no,” says Bondar. 
“Also, it’s one thing to track a tank, and it’s another to track infantrymen riding buggies and motorcycles that are moving very fast. That’s really challenging for AI to track and strike precisely.”

Clark, at the Hudson Institute, says that although the AI algorithms used to guide the Russian and Ukrainian drones are “pretty good,” they rely on information provided by sensors that “aren’t good enough.” “You need multiphenomenology sensors that are able to look at infrared and visual and, in some cases, different parts of the infrared spectrum to be able to figure out if something is a decoy or a real target,” he says. German defense analyst Lange agrees that right now, battlefield AI image-recognition systems are too easily fooled. “If you compress reality into a 2D image, a lot of things can be easily camouflaged—like what Russia did recently, when they started drawing birds on the back of their drones,” he says.

Autonomy Remains Elusive on the Ground and at Sea, Too

To make Ukraine’s emerging uncrewed ground vehicles (UGVs) equally self-sufficient will be an even greater task, in Clark’s view. Still, Bondar expects major advances to materialize within the next several years, even if humans are still going to be part of the decision-making loop.

A mobile electronic-warfare system built by PiranhaTech is demonstrated near Kyiv on 21 October 2025. DANYLO ANTONIUK/ANADOLU/GETTY IMAGES

“I think in two or three years, we will have pretty good full autonomy, at least in good weather conditions,” she says, referring to aerial drones in particular. “Humans will still be in the loop for some years, simply because there are so many unpredictable situations when you need an intervention. We won’t be able to fully rely on the machine for at least another 10 or 15 years.”

Ukrainian defenders are apprehensive about that autonomous future. The boom of drone innovation has come hand in hand with the development of sophisticated jamming and radio-frequency detection systems. 
But a lot of that innovation will become obsolete once the pendulum swings away from human control. Ukrainians got their first taste of dealing with unjammable drones in mid-2024, when Russia began rolling out fiber-optic tethered drones. Now they have to brace for a threat on a much larger scale.

An experimental drone is demonstrated at the Brave1 defense-tech incubator in Kyiv. DANYLO DUBCHAK/FRONTLINER/GETTY IMAGES

“Today, we have a situation where we have lots of signals on the battlefield, but in the near future, in maybe two to five years, UAVs are not going to be sending any signals,” says Oleksandr Barabash, CTO of Falcons, a Ukrainian startup that has developed a smart radio-frequency detection system capable of revealing the precise locations of enemy radio sources such as drones, control stations, and jammers. Last September, Falcons secured funding from the U.S.-based dual-use tech fund Green Flag Ventures to scale production of its technology and work toward NATO certification. But Barabash admits that its system, like all technologies fielded in Ukrainian war zones, has an expiration date.

Instead of radio-frequency detectors, Barabash thinks, the next R&D push needs to focus on passive radar systems capable of identifying small and fast-moving targets based on signals from sources like TV towers or radio transmitters that propagate through the environment and are reflected by those moving targets. Passive radars have a significant advantage in the war zone, according to Barabash. Since they don’t emit their own signal, they can’t be as easily discovered by the enemy. “Active radar is emitting signals, so if you are using active radars, you are target No. 
1 on the front line,” Barabash says.

Bondar, on the other hand, thinks that the increased onboard compute power needed for AI-controlled drones will, by itself, generate enough electromagnetic radiation to prevent autonomous drones from ever operating completely undetectably. “You can have full autonomy, but you will still have systems onboard that emit electromagnetic radiation or heat that can be detected,” says Bondar. “Batteries emit electromagnetic radiation, motors emit heat, and [that heat can be] visible in infrared from far away. You just need to have the right sensors to be able to identify it in advance.” She adds that the takeaway is “how capable contemporary detection systems have become and how technically challenging it is to design drones that can reliably operate in the Ukrainian battlefield environment.”

There Will Be Nowhere to Hide from Autonomous Drones

When autonomous drones become a standard weapon of war, their threat will extend far beyond the battlefields of Ukraine. Autonomous turrets and drone-interceptor fortifications might soon dot the perimeters of European cities, particularly in the eastern part of the continent.

A fixed-wing drone is tested in Ukraine in April 2025. ANDREW KRAVCHENKO/BLOOMBERG/GETTY IMAGES

Nefarious actors from all over the world have closely watched Ukraine and taken notes, warns Lange. Today, FPV drones are being used by Islamic terrorists in Africa and by Mexican drug cartels fighting against local authorities. When autonomous killing machines become widely available, it’s likely that no city will be safe. “We might see nets above city centers, protecting civilian streets,” Lange says. “In every case, the West needs to start performing similar kinetic-defense development that we see in Ukraine. Very rapid iteration and testing cycles to find solutions.”

Azhnyuk is concerned that the historic defenders of Europe—the United States and the European countries themselves—are falling behind. “We are in danger,” he says. 
While Russia and Ukraine made major strides in their drones and countermeasures over the past year, “Europe and the United States have progressed, in the best-case scenario, from the winter-of-2022 technology to the summer-of-2022 technology.”

“The gap is getting wider,” he warns. “I think the next few years are very dangerous for the security of Europe.”

This article appears in the April 2026 print issue as “Rise of the AUTONOMOUS Attack Drones.”
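The inertial dead reckoning described earlier for the early Shaheds is, at its core, double integration of acceleration. The one-dimensional sketch below is purely illustrative (function names, the fixed time step, and the sample values are invented for the example), not a model of any actual drone firmware:

```python
# Illustrative sketch, not real drone firmware: dead reckoning integrates
# accelerometer readings twice to estimate position when satellite
# navigation is jammed. 1-D, constant time step, invented values.
def dead_reckon(accels, dt, v0=0.0, x0=0.0):
    """Estimate positions from a series of acceleration samples (m/s^2)."""
    v, x = v0, x0
    positions = []
    for a in accels:
        v += a * dt          # integrate acceleration -> velocity
        x += v * dt          # integrate velocity -> position
        positions.append(x)
    return positions

# Constant 1 m/s^2 for 3 seconds, sampled once per second:
track = dead_reckon([1.0, 1.0, 1.0], dt=1.0)  # -> [1.0, 3.0, 6.0]
```

In a real inertial unit the small errors in each sample accumulate through both integrations, which is why dead reckoning drifts over time and is used as a backup to, not a replacement for, satellite navigation.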

24.03.2026 13:00:05

Technology and Science

Mel Olken
Former executive director of the IEEE Power & Energy Society
Fellow, 92; died 9 January

Olken became the first executive director of the IEEE Power & Energy Society (PES) in 1995. In 2002 he left the position to serve as founding editor in chief of the society’s Power & Energy Magazine. Olken led the publication until 2016, when he retired.

After receiving a bachelor’s degree in engineering from the City College of New York, Olken was hired as an electrical engineer by American Electric Power, a utility based in Columbus, Ohio. He helped design coal, hydroelectric, and nuclear power plants. While at AEP, he was promoted to manager of the electrical generation department.

He joined IEEE in 1958 and became a PES member in 1973. An active volunteer, he chaired the society’s energy development and power generation committee and its technical council. Olken was elected an IEEE Fellow in 1988 for “contributions to innovative design of reliable generating stations.” He became an IEEE staff member in 1984 as society services director for IEEE Technical Activities. From 1990 to 1995 he served as managing director of the Regional Activities group (now IEEE Member and Geographic Activities), before becoming PES executive director. He received a PES Lifetime Achievement Award in 2012 for his “broad and sustained technical contributions to the development of power engineering and the power engineering profession.”

Stephanie A. Huguenin
Research scientist
IEEE member, 48; died 1 October

Huguenin was an administrative assistant in the physics and biophysics department at Augusta University, in Georgia. According to her Augusta obituary, she died of an illness acquired during her volunteer work in India. She received a bachelor’s degree in engineering in 1999 from the College of Charleston, in South Carolina. During her senior year, she worked as a mathematics and science tutor at the Jenkins Orphanage (now the Jenkins Institute for Children), in North Charleston. 
After graduating, Huguenin traveled to India to volunteer at an orphanage run by the Mother Teresa Foundation. Upon returning to the United States in 2001, Huguenin worked as a freelance research consultant. Three years later she was hired as a systems administrator and archivist by photographer Ebet Roberts in New York City. In 2010 she left to work as an operations strategist and technical consultant.

She earned a master’s degree in communication and research science in 2016 from New York University. While at NYU, she conducted experimental and theoretical research in Internet Protocol design and implementation as well as network security and management. From 2020 to 2024 she was a research scientist at businesses owned by her family. She joined Augusta University in 2023. She was a member of the IEEE Geoscience and Remote Sensing Society and the IEEE Systems Council.

Huguenin volunteered for the Internet Engineering Task Force, a standards development organization, and the American Registry for Internet Numbers. ARIN manages and distributes internet number resources such as IP addresses and autonomous system numbers. The nonprofits she supported included the Coastal Conservation League, the Longleaf Alliance, the Lowcountry Land Trust, the Nature Conservancy, and Women in Defense.

23.03.2026 18:00:05

Technology and Science

This is a sponsored article brought to you by PNY Technologies.

In today’s data-driven world, data scientists face mounting challenges in preparing, scaling, and processing massive datasets. Traditional CPU-based systems are no longer sufficient to meet the demands of modern AI and analytics workflows. The NVIDIA RTX PRO™ 6000 Blackwell Workstation Edition offers a transformative solution, delivering accelerated computing performance and seamless integration into enterprise environments.

Key Challenges for Data Science

Data preparation: Data preparation is a complex, time-consuming process that takes up most of a data scientist’s time.

Scaling: The volume of data is growing at a rapid pace. Data scientists may resort to downsampling to make large datasets more manageable, leading to suboptimal results.

Hardware: Demand for accelerated AI hardware from data centers and cloud service providers (CSPs) is exceeding supply, and current desktop computing resources may not be suitable for data science workflows.

Benefits of RTX PRO-Powered AI Workstations

The NVIDIA RTX PRO 6000 Blackwell Workstation Edition delivers ultimate acceleration for data science and AI workflows. These powerful and robust workstations enable real-time rendering, rapid prototyping, and seamless collaboration. With support for up to four NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition GPUs, users can achieve data-center-level performance right at their desk, making even the most demanding tasks manageable.

PNY is redefining professional computing with the @NVIDIA RTX PRO 6000 Blackwell Workstation Edition, the most powerful desktop GPU ever built. 
Engineered for unmatched compute power, massive memory capacity, and breakthrough performance, this cutting-edge solution delivers a quantum leap forward in workflow efficiency, enabling professionals to tackle the most demanding applications with ease. PNY

The NVIDIA RTX PRO 6000 Blackwell Workstation Edition empowers data scientists to handle massive datasets, perform advanced visualizations, and support multi-user environments without compromise. It’s ideal for organizations scaling up their analytics or running complex models. The card is optimized for AI workflows, leveraging the NVIDIA AI software stack, including CUDA-X and NVIDIA AI Enterprise software. These platforms enable zero-code-change acceleration for Python-based workflows and support over 100 AI-powered applications, streamlining everything from data preparation to model deployment.

Finally, the NVIDIA RTX PRO 6000 Blackwell Workstation Edition offers significant advantages in security and cost control. By offloading compute from the data center and reducing reliance on cloud resources, organizations can lower expenses and keep sensitive data on-premises for enhanced protection.

Accelerate Every Step of Your Workflow

The NVIDIA RTX PRO 6000 Blackwell Workstation Edition is designed to transform the entire data science pipeline, delivering end-to-end acceleration from data preparation to model deployment. With the open-source cuDF library from NVIDIA CUDA-X and other GPU-accelerated libraries, data scientists can process massive datasets at lightning speed, often achieving up to 50X faster performance compared to traditional CPU-based tools. 
This means tasks like cleaning data, managing missing values, and engineering features can be completed in seconds, not hours, allowing teams to focus on extracting insights and building better models.

Exploratory data analysis is elevated with advanced analytics and interactive visualizations, powered by NVIDIA CUDA-X and PyData libraries. These tools enable users to create expansive, responsive visualizations that enhance understanding and support critical decision-making. When it comes to model training, GPU-accelerated XGBoost slashes training times from weeks to minutes, enabling rapid iteration and faster time to market for AI solutions.

The NVIDIA RTX PRO 6000 Blackwell Workstation Edition also streamlines collaboration and scalability. With NVIDIA AI Workbench, teams can set up projects, develop, and collaborate seamlessly across desktops, cloud platforms, and data centers. The unified software stack ensures compatibility and robustness, while enterprise-grade hardware maximizes uptime and reliability for demanding workflows. By integrating these advanced capabilities, the NVIDIA RTX PRO 6000 Blackwell Workstation Edition empowers data scientists to overcome bottlenecks, boost productivity, and drive innovation, making it an essential foundation for modern, enterprise-ready AI development.

Performance Benchmarks

NVIDIA’s cuDF library offers zero-code-change acceleration for pandas, delivering up to 50X performance gains. For example, a join operation that takes nearly 5 minutes on CPU completes in just 14 seconds on GPU. 
Advanced group-by operations drop from almost 4 minutes to just 4 seconds.

Enterprise-Ready Solutions from PNY

Available from leading OEM manufacturers, NVIDIA RTX PRO 6000 Blackwell Workstation Edition Series GPUs are specifically engineered to meet the rigorous demands of enterprise environments. These systems incorporate NVIDIA ConnectX networking, now available at PNY, and a comprehensive suite of deployment and support tools, ensuring seamless integration with existing IT infrastructure. Designed for scalability, the latest generation of workstations can tackle complex AI development workflows at scale for training, development, or inferencing. Enterprise-grade hardware maximizes uptime and reliability.

To learn more about NVIDIA RTX PRO™ Blackwell solutions, visit NVIDIA RTX PRO Blackwell | PNY Pro | pny.com or email GOPNY@PNY.COM.
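The zero-code-change claim means ordinary pandas code is what gets accelerated. The toy snippet below (dataset and column names are invented for illustration) shows the kind of join and group-by the benchmark numbers refer to; run unchanged under NVIDIA’s cudf.pandas accelerator from RAPIDS (for example, `python -m cudf.pandas script.py`, or `%load_ext cudf.pandas` in Jupyter before importing pandas), the same code executes on the GPU:

```python
# Plain pandas code -- no GPU-specific calls. Launched via cudf.pandas,
# these operations are dispatched to the GPU without code changes.
# The tiny DataFrames stand in for the multi-gigabyte benchmark inputs.
import pandas as pd

orders = pd.DataFrame(
    {"customer_id": [1, 2, 2, 3], "amount": [10.0, 20.0, 5.0, 7.5]}
)
customers = pd.DataFrame(
    {"customer_id": [1, 2, 3], "region": ["EU", "US", "US"]}
)

# The join and group-by operations cited in the benchmarks:
joined = orders.merge(customers, on="customer_id")
per_region = joined.groupby("region")["amount"].sum()
```

At this scale the GPU offers no benefit, of course; the speedups the article cites come from running identical code on datasets large enough to saturate CPU-based pandas.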

23.03.2026 13:00:04

Technology and Science

Most people who regularly use AI tools would say they’re making their lives easier. The technology promises to streamline and take over tasks both professionally and personally—whether that’s summarizing documents, drafting deliverables, generating code, or even offering emotional support. But researchers are concerned AI is making some tasks too easy, and that this will come with unexpected costs.

In a commentary titled “Against Frictionless AI,” published in Communications Psychology on 24 February, psychologists from the University of Toronto discuss what might be lost when AI removes too much effort from human activities. Their argument centers on the idea that friction—difficulty, struggle, and even discomfort—plays an important role in learning, motivation, and meaning. Psychological research has long shown that effortful engagement can deepen understanding and strengthen memory, an effect sometimes described as “desirable difficulties.” The authors worry that AI systems capable of instantly producing polished answers or highly responsive conversation may bypass these processes of learning and motivation. By prioritizing outcomes over effort, AI could weaken the experiences that help people develop skills, build relationships, and find meaning in their work.

IEEE Spectrum spoke with the paper’s lead author, Emily Zohar, an experimental psychology Ph.D. student, about why she and her coauthors (psychologists Paul Bloom and Michael Inzlicht) argue that friction matters—and what a more human-centered approach to AI design could look like.

When you say “friction,” what do you mean, from both a cognitive and an interpersonal standpoint?

Zohar: We define friction as any difficulty encountered during goal pursuit. 
In the context of work, it involves mental effort: rumination and persistence, staying on a problem for some time, which helps solidify the idea and the creative process. In relationships, friction involves disagreement, compromise, misunderstanding, a natural back-and-forth where you don’t always see eye to eye, and it helps you broaden your horizons. Even the feeling of loneliness is important. It motivates you to find social interactions. So having these negative feelings and difficulty is important in the social context.

Given that definition, what do you mean by “frictionless” AI?

Zohar: Frictionless AI refers to the excessive removal of effort from cognitive and social tasks. With AI, as we typically use it, it’s really easy to go from ideation right to the end product. You ask AI to solve something with one prompt, and it completes the whole thing. This is a problem because it takes away the intermediate steps that really drive motivation and learning, and it prioritizes outcome over process. Rather than working through the steps, AI does that meaningful work for you. There’s a lot of research showing work products are better with AI. That makes sense, it has all this knowledge, but it does worry us, as it may be eroding something essential that will have long-term consequences. If you’re faced with the same problem and AI is removed, you don’t have the required knowledge to know how to face the problem next time.

You argue that removing friction can harm learning and relationships. What role do effort and struggle play in human development?

Zohar: In learning, the term is “desirable difficulties.” It’s the idea of effort and work, not just any effort but manageable effort. Facing problems that you can overcome, but you have to work at them a bit—that’s the key idea of friction. We don’t want you to face insurmountable problems. We want you to work hard, but still be able to overcome it. 
This helps you really digest information and learn from it. In interpersonal relationships, you have to face some difficulties to see other perspectives and learn from them, and learn to be accepting of others. If you’re used to an AI reinforcing all your ideas and being sycophantic, you’ll come into the real world and you won’t be used to seeing other ideas. You won’t know how to interact socially because you’ll expect people to always be on your side and agree with you. You won’t learn that life doesn’t always go exactly how you expect it to, and conversations don’t always go the way you want them to.

AI’s Impact on Creative Processes

A lot of technologies have historically aimed to reduce effort: calculators, washing machines, spell-check. What’s different about AI?

Zohar: Past technologies have mostly focused on reducing physical effort. We don’t have to go down to the lake to wash our laundry anymore. [Past technologies] took away the mundane tasks that weren’t driving our learning and growth; they were just adding unneeded obstacles and taking away time from more important tasks. But AI is taking away effort from creative and cognitive processes that drive meaning, motivation, and learning. That’s a key difference, because it’s not taking away friction from tasks that don’t serve us. It’s taking away friction from experiences that are really important and integral to our development.

Are there contexts where AI is already removing beneficial friction? How might the impacts of reduced friction show up over time?

Zohar: One clear example is writing. People increasingly rely on AI to draft everything from emails to essays, removing many instances of beneficial friction. Research shows that people trust responses less when they learn they were written by AI, judge AI-generated products as less creative and less valuable, and have greater difficulty remembering their own work products when they were produced with AI assistance. 
Outsourcing writing to AI strips away both social and cognitive friction. Vibe coding is another good example. If you’re a programmer, coding is integral to what drives your meaning. People get meaning out of their work, and if you’re substituting that with AI, it could be detrimental. The negative impact of frictionless AI is that it takes away friction from things that are really important to who you are as a person, and to your skills.

One area I worry about a lot is adolescents using AI in general. It’s a really important developmental period to learn and grow and find the path you’ll follow. So if you don’t have these effortful interactions with work and relationships that teach you how to think, this will have long-term detrimental impacts. They might not be able to think critically in the same way, because they never had to before. If they’re turning to AI for social relationships at such a young age, that could really erode important skills they should be learning at that age.

What is productive friction?

Zohar: Friction falls along a continuum. With too little friction, you’re not getting learning and motivation. With too much friction, the task becomes overwhelming. Productive friction falls right in the middle, where struggle leads to achievement. It’s effortful but possible, and it requires you to think critically and work on a problem for some time or face some difficulty in the process. An example we used in the paper is the difference between taking a chairlift and hiking up a mountain. They both get to the top, but with the chairlift, you don’t get any growth benefits, while the hiker’s climb involves difficulties and a sense of achievement. 
It becomes much more of an experience and a learning opportunity versus the person who just went up the chairlift effortlessly.

Do you envision AI that sometimes deliberately slows people down or asks them to do part of the work themselves?

Zohar: It’s important in behavioral science to think about the default option, because people don’t usually change their default. So right now, the default in AI is to give you your answer and probe you to keep going down the rabbit hole. But I think we could think about AI in a different way. Maybe we can make the default more constructive. Instead of just jumping to the answer, it’s more of a process model where it helps you think about the problem and teaches you along the way, so it’s more collaborative rather than a one-stop shop for the answer.

How might users of these systems and the companies developing them feel about such a design shift?

Zohar: For the makers of these systems, the biggest concern is the pushback. People are used to going in and just getting the answer, and they might be really resistant to a design that makes them work more for it. But it might feed more engagement, because you have to go back and forth and find the answer together. Ultimately I think it has to come from the companies making these models, if they think [a more friction-full design] would help people. Friction-full AI is more of a long-term product. It’s hard to say if that would motivate companies to change their models to include moderate friction. But in the long term, I think this would be beneficial.

22.03.2026 13:00:04

Technology and Science

Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2026: 1–5 June 2026, VIENNA
Summer School on Multi-Robot Systems: 29 July–4 August 2026, PRAGUE

Enjoy today’s videos!

Human athletes demonstrate versatile and highly dynamic tennis skills to successfully conduct competitive rallies with a high-speed tennis ball. However, reproducing such behaviors on humanoid robots is difficult, partially due to the lack of perfect humanoid action data or human kinematic motion data in tennis scenarios as reference. In this work, we propose LATENT, a system that Learns Athletic humanoid TEnnis skills from imperfect human motioN daTa.

[ LATENT ]

A beautifully designed robot inspired by Strandbeests.

[ Cranfield University ]

We believe we’re the first robotics company to demonstrate a robot peeling an apple with dual dexterous humanlike hands. This breakthrough closes a key gap in robotics, achieving bimanual, contact-rich manipulation and moving far beyond the limits of simple grippers. Today’s AI models (VLMs) are excellent at perception but struggle with action. Controlling high-degree-of-freedom hands for tasks like this is incredibly complex, and precise finger-level teleoperation is nearly impossible for humans. Our first step was a shared-autonomy system: rather than controlling every finger, the operator triggers prelearned skills like a “rotate apple or tennis ball” primitive via a keyboard press or pedal. This makes scalable data collection and RL training possible. How does the AI manage this? We created “MoDE-VLA” (Mixture of Dexterous Experts). It fuses vision, language, force, and touch data by using a team of specialist “experts,” making control in high-dimensional spaces stable and effective. The combination of these two innovations allows for seamless, contact-rich manipulation. 
The human provides high-level guidance, and the robot executes the complex in-hand coordination required.

[ Sharpa ]

Thanks, Alex!

It was great to see our name amongst the other “AI Native” companies during the NVIDIA GTC keynote. NVIDIA Isaac Lab helps us train reinforcement learning policies that enable the UMV to drive, jump, flip, and hop like a pro.

[ Robotics and AI Institute ]

This Finger-Tip Changer technology was jointly researched and developed through a collaboration between Tesollo and RoCogMan LaB at Hanyang University ERICA. The project integrates Tesollo’s practical robotic hand development experience with the lab’s expertise in robotic manipulation and gripper design.

I don’t know why more robots don’t do this. Also, those pointy fingertips are terrifying.

[ RoCogMan LaB ]

Here’s an upcoming ICRA paper from the Fluent Robotics Lab at the University of Michigan featuring an operational PR2! With functional batteries!!!

[ Fluent Robotics Lab ]

This video showcases the field tests and interaction capabilities of KAIST Humanoid v0.7, developed at the DRCD Lab featuring in-house actuators. The control policy was trained through deep reinforcement learning leveraging human demonstrations.

[ KAIST DRCD Lab ]

This needs to come in adult size.

[ Deep Robotics ]

I did not know this, but apparently shoeboxes are really annoying to manipulate because if you grab them by the lid, they just open, so specialized hardware is required.

[ Nomagic ]

Thanks, Gilmarie!

This paper presents a method to recover quadrotor Unmanned Air Vehicles (UAVs) from a throw, when no control parameters are known before the throw.

[ MAVLab ]

Uh-oh, robots can see glass doors now. We’re in trouble.

[ LimX Dynamics ]

This drone hugs trees

21.03.2026 16:30:04

Technology and Science

Wheelchair users with severe disabilities can often navigate tight spaces better than most robotic systems can. A wave of new smart-wheelchair research, including findings presented in Anaheim, Calif., earlier this month, is now testing whether AI-powered systems can, or should, fully close this gap.

Christian Mandel, senior researcher at the German Research Center for Artificial Intelligence (DFKI) in Bremen, Germany, co-led a research team together with his colleague Serge Autexier that developed prototype sensor-equipped electric wheelchairs designed to navigate a roomful of potential obstacles. The researchers also tested a new safety system that integrated sensor data from the wheelchair and from sensors in the room, including drone-based color and depth cameras. Mandel says the team’s smart wheelchairs were both semiautonomous and autonomous. “Semiautonomous is the shared control system where the person sitting in the wheelchair uses the joystick to drive,” Mandel says. “Fully autonomous is controlled by natural-language input. You say, ‘Please drive me to the coffee machine.’ ”

A close-up of the wheelchair’s joystick and camera. DFKI

The researchers conducted experiments (part of a larger project called Reliable and Explainable Swarm Intelligence for People With Reduced Mobility, or REXASI-PRO) using two identical smart wheelchairs that each contained two lidars, a 3D camera, odometers, user interfaces, and an embedded computer. In contrast to semiautonomous mode, where the participant controls the wheelchair with a joystick, in autonomous mode, control involves the open-source ROS2 Nav2 navigation system using natural-language input. 
The wheelchairs also used simultaneous localization and mapping (SLAM) maps and local obstacle-avoidance motion controllers.

One scenario that Mandel and his team tested involved the user pressing a key on the wheelchair’s human-machine interface, speaking a command, then confirming or rejecting the instruction via that same interface. Once the user confirmed the command, the mobility device guided the user along a path to the destination, while sensors attempted to detect obstacles in the way and adjust the mobility device accordingly to avoid them.

When Are Smart Wheelchairs Bad Value?

According to Pooja Viswanathan, CEO and founder of Toronto-based Braze Mobility, research in the field of mobile assistive technology should also prioritize keeping these devices readily available to everyday consumers.

“Cost remains a major barrier,” she says. “Funding systems are often not designed to support advanced add-on intelligence unless there is very clear evidence of value and safety. Reliability is another barrier. A smart wheelchair has to work not just in ideal conditions, but in the messy, variable conditions of daily life. And there is also the human factors dimension. Users have different cognitive, motor, sensory, and environmental needs, so one solution rarely fits all.”

For its part, Braze makes blind-spot sensors for electric wheelchairs. The sensors detect obstacles in areas that can be difficult for a user to see. They can also be added to any wheelchair to transform it into a smart wheelchair by providing multimodal alerts to the user. This approach attempts to support users rather than replace them.

According to Louise Devigne, a biomedical research engineer at IRISA (Research Institute of Computer Science and Random Systems) in Rennes, France, the increased complexity of smart wheelchairs demands more sensing, and that requires careful management of communication and synchronization within the wheelchair’s system.
“The more sensing, computation, and autonomy you add,” she says, “the harder it becomes to ensure robust performance across the full range of real-world environments that wheelchair users encounter.”

In the near term, in other words, the field’s biggest challenge is not about replacing the wheelchair user with AI smarts but rather about designing better partnerships between the user and the technology.

This image shows data representations used by the 3D Driving Assistant. These include immutable sensor percepts such as laser scans and point clouds, as well as derived representations like virtual laser scans and grid maps. Finally, the robot shape collection describes the wheelchair’s physical borders at different heights. DFKI

Where Will Smart Wheelchairs Go From Here?

Mandel says he expects to see smart wheelchairs ready for the mainstream marketplace within 10 years.

Viswanathan says the REXASI-PRO system, while out of reach of present-day smart wheelchair technologies, is important for the longer term. “It reflects the more ambitious end of the smart wheelchair spectrum,” she says. “Its strengths appear to lie in intelligent navigation, advanced sensing, and the broader effort to build a wheelchair that can interpret and respond to complex environments in a more autonomous way. From a research standpoint, that is exactly the kind of work that pushes the field forward. It also appears to take seriously the importance of trustworthy and explainable AI, which is essential in any mobility technology where safety, reliability, and user confidence are paramount.”

Mandel says he’s ultimately in pursuit of the inspiration that got him into this field years ago.
As a young researcher, he says, he helped develop a smart wheelchair system controllable with a head joystick. However, after many trials Mandel realized the system had a long way to go: “At that point in time, I realized that even persons that had severe handicaps [traveling through] a narrow passage, they did very, very well.

“And then I realized, okay, there is this need for this technology, but never underestimate what [wheelchair users] can do without it.”

The DFKI researchers presented their work earlier this month at the CSUN Assistive Technology Conference in Anaheim, Calif.

This article was supported by the IEEE Foundation and a Jon C. Taenzer fellowship grant.
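The speak-confirm-navigate flow described above can be sketched in plain Python. This is a hypothetical illustration only: the waypoint names, coordinates, and function names are invented, and the real REXASI-PRO wheelchairs hand confirmed goals to the ROS 2 Nav2 stack rather than to a dictionary lookup.

```python
# Hypothetical sketch of the speak -> confirm -> navigate flow.
# Waypoint names and coordinates are invented for illustration; the
# real system resolves goals on a SLAM map and dispatches them to Nav2.

WAYPOINTS = {
    "coffee machine": (4.2, 1.5),   # (x, y) on the map, in meters
    "front door": (0.0, 7.0),
}

def parse_command(utterance):
    """Match a spoken command against the known destinations."""
    text = utterance.lower()
    for name, pose in WAYPOINTS.items():
        if name in text:
            return name, pose
    return None

def handle_command(utterance, confirm):
    """Return a navigation goal once the user confirms it, else None."""
    match = parse_command(utterance)
    if match is None:
        return None                 # unrecognized destination
    name, pose = match
    if confirm(f"Drive to the {name}?"):
        return pose                 # hand off to the navigation stack
    return None

goal = handle_command("Please drive me to the coffee machine",
                      confirm=lambda prompt: True)
```

The confirm callback stands in for the key press on the human-machine interface; rejecting the prompt cancels the goal before any motion is commanded.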

20.03.2026 18:49:12


The rapid ascent of artificial intelligence and semiconductor manufacturing has created a paradox: Industries are booming, yet they face a critical shortage of skilled workers. Demand for data center technicians, fabrication facility workers, and similar positions is growing, but there aren’t enough candidates with the right skill sets to fill the in-demand jobs.

Although those technical roles are essential, they don’t always require a four-year degree—which has paved the way for skills-based microcredentials. By partnering with higher education institutions and training providers, industry leaders are helping to design targeted skills programs that quickly turn learners into job-ready technical professionals.

The new standard for skills validation

Because microcredentials are relatively new, consistency is key. Through its credentialing program, IEEE serves as a bridge between academia and industry. Developed and managed by IEEE Educational Activities, the program offers standardized credentials in collaboration with training organizations and universities seeking to provide skills-based qualifications outside formal degree programs. IEEE, as the world’s largest technical professional organization, has more than 30 years of experience offering industry-relevant credentials and expertise in global standardization.

IEEE is setting the benchmark for skills-based microcredentials by establishing a framework that includes assessment methods, qualifications for instructors and assessors, and criteria for skill levels.

A recent collaboration with the University of Southern California, in Los Angeles, for example, developed microcredentials for USC’s semiconductor cleanroom program.
USC heads the CA Dreams microelectronics innovation hub.

“The IEEE framework allows us to rapidly prototype training programs and adapt on the fly in a way that building new university courses—much less degree programs—won’t allow.” —Adam Stieg

IEEE worked with USC to create standardized skills assessments and associated microcredentials so that industry hiring managers can recognize the newly developed skills. The microcredentials help people with or without four-year degrees join the semiconductor industry as cleanroom technicians or as engineers with cleanroom experience.

IEEE also has partnered with the California NanoSystems Institute at the University of California, Los Angeles, to create skills-based microcredentials for its cleanroom protocol and safety program.

Best practices for designing microcredentials

Based on IEEE’s work designing microcredentials with USC, UCLA, and other leading academic institutions, three best practices have emerged.

1. Align with industry needs before design.

Collaborate with industry prior to starting the design process. There isn’t a one-size-fits-all approach. Workforce needs vary based on industry sector, company size, and geography. Higher education institutions and training providers build relationships with companies and industry groups to create effective microcredential programs and methods of assessment.

2. Build for flexibility.

Traditional academic cycles can be slow, but technology moves fast. A flexible skills-based microcredentials framework allows programs to create or pivot as new breakthroughs occur.

“Setting up a credit-bearing course is not easy. And in a rapidly changing environment, you need to pivot quickly,” says Adam Stieg, research scientist and associate director at UCLA’s CNSI.
“IEEE skills-based microcredentials are a flexible way to keep our curriculum aligned with an evolving technology landscape.”

Stieg’s team worked with IEEE to build a framework to create microcredentials for its cleanroom protocol and safety program, ensuring it kept pace with the industry’s evolution.

“The IEEE framework allows us to rapidly prototype training programs and adapt on the fly,” he says, “in a way that building new university courses—much less degree programs—won’t allow.”

3. Implement a continuous-feedback loop.

Many of the technical roles companies are looking to fill in emerging fields such as AI, cybersecurity, and semiconductors are still being developed or are quickly evolving. The rapidly changing landscape requires continual communication and feedback among higher education, training providers, and industry.

“We struggle to have feedback loops through the education system to the industry and back again,” says Matt Francis, president and CEO of Ozark Integrated Circuits, in Fayetteville, Ark. Francis, who has served as IEEE Region 5 director, is an IEEE volunteer who supports workforce development for the semiconductor industry.

Creating consistent feedback loops is critical for generating consensus on the skill sets needed for microcredential programs, experts say, and it allows providers to update assessments as new tools and safety protocols enter the workplace.

“If we start thinking about having training frameworks used within companies that are essentially on some sort of standard and align with a microcredential, we can start to build consensus,” Francis says.

Getting started

Through its credentialing program, IEEE is helping higher education and industry work together to bridge the technical workforce skills gap. Contact its team to learn how IEEE skills-based microcredentials can help you fill your workforce pipeline.

20.03.2026 18:00:04


A growing number of Nigerian companies are turning to kit-based assembly to bring electric vehicles to market in Africa. Lagos-based Saglev Micromobility Nigeria recently partnered with Dongfeng Motor Corp., in Wuhan, China, to assemble 18-seat electric passenger vans from imported kits.

Kit-based assembly allows Nigerian firms to reduce costs, create jobs, and develop local technical expertise—key steps toward expanding EV access. Fully assembled and imported EVs face high tariffs that put them out of reach for many African consumers, whereas kit-based approaches make electric mobility more affordable today. Saglev’s initiative reflects a broader trend: CIG Motors, NEV Electric, and regional players in Côte d’Ivoire, Ghana, and Kenya are also leveraging imported kits to build local EV ecosystems, signaling that parts of West Africa are intent on catching up with global electrification efforts.

Expanding the Local EV Ecosystem

CIG Motors operates a kit-assembly plant in Lagos producing vehicles from Chinese automakers GAC Motor and Wuling Motors. These vehicles include the Wuling Bingo, a compact five-door electric hatchback, and the Hongguang Mini EV Macaron, a microcar with roughly 200 kilometers of range aimed at ride-share operators looking for ultralow-cost urban transport. NEV Electric focuses on electric buses and three-wheelers for urban transit and last-mile delivery.

Saglev’s CEO, Olu Faleye, emphasizes that Nigeria’s EV transition addresses both practical economic needs and environmental goals. Beyond passenger transport, electric vehicles could help reduce one of Nigeria’s persistent agricultural challenges: postharvest spoilage.
Nigeria loses an estimated 30 million to 40 million tonnes of food annually because of weak logistics and limited refrigeration infrastructure, according to the Organization for Technology Advancement of Cold Chain in West Africa. Electric vans, minitrucks, and three-wheel cargo vehicles could help close this gap because their batteries can power refrigeration systems during transport without relying on costly diesel fuel. As EV adoption grows and charging infrastructure expands, temperature-controlled transport could become more affordable, reducing spoilage, improving farmer incomes, and helping stabilize food supplies, the organization says.

“I don’t believe that the promised land is making a fully built EV on the ground here.” –Olu Faleye, Saglev CEO

Beyond Nigeria, Mombasa, Kenya–based Associated Vehicle Assemblers has begun making electric taxis and minibuses from imported kits, and Ghana’s government is spurring kit-car assembly there under its national Automotive Development Plan. In Ghana, assemblers benefit from import-duty exemptions on kits and equipment, corporate tax breaks, and access to industrial infrastructure. Saglev is already availing itself of those benefits at its kit-assembly plant in Accra, Ghana. The company says it also plans to expand its assembly operations to Côte d’Ivoire.

Infrastructure Challenges and Workarounds

Despite these signs that West Africa’s EV ecosystem is gaining traction, limited grid reliability and sparse public charging infrastructure remain major barriers to widespread EV adoption. Urban households in Nigeria experience roughly six or seven blackouts per week, each lasting about 12 hours, according to Nigeria’s National Bureau of Statistics. That’s more downtime each day than the average U.S. household experiences in a year.
More than 40 percent of households rely on generators, which supply about 44 percent of residential electricity, according to research by Stears and Sterling Bank. Many early EV adopters therefore charge vehicles using gasoline or diesel generators. Faleye notes that Nigerians have long relied on such workarounds and expects fossil fuels to remain part of the EV charging equation for the foreseeable future—at least until falling costs for solar panels and battery storage make cleaner charging viable.

He acknowledges that charging EVs using hydrocarbons is fraught from an environmental perspective, but he points out that the practice at least brings other benefits of EVs, including lower maintenance costs and the EVs’ synergies with refrigeration and transportation logistics. And he points to a 2020 peer-reviewed study in the journal Environmental and Climate Technologies that compared the overall efficiency of internal combustion vehicles and electric vehicles across the full well-to-wheel energy chain. The study’s conclusion: Even after accounting for conversion losses, generating electricity with a diesel or gasoline generator to power an electric vehicle can remain just as efficient overall as burning the same fuel directly in a vehicle’s internal combustion engine.

Workers at Saglev’s Lagos, Nigeria, EV assembly plant put the finishing touches on partially assembled vehicle kits imported from China. Saglev

Scalable EV Adoption in Nigeria

The approach taken by Saglev and other Nigerian kit-car builders shows how local assembly can advance EV adoption even where infrastructure remains unreliable.
By starting with kits, companies can deploy practical electric mobility solutions now while building the supply chains and technical expertise needed for more resource-intensive localized production. Still, when asked whether Saglev plans to eventually move beyond kit assembly to independent design and manufacturing of EVs, Faleye calls such a move impractical.

“I don’t believe that the promised land is making a fully built EV on the ground here,” he says. “For me to do efficient vehicle manufacturing, I’d need a lot of robotics and 3D printing. That expense is unnecessary—it would just increase costs and make EVs more expensive.”

In a country where electricity can disappear for days, Nigeria’s kit-based EV strategy highlights a practical truth: Incremental progress and ingenuity may matter more than perfect infrastructure. For Saglev, every kit-based vehicle rolling off the line is not just a van or bus—it’s a step toward an EV ecosystem that works for Nigeria’s realities today.
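The well-to-wheel comparison cited above can be sketched with a back-of-envelope calculation. The efficiency figures below are rough textbook-style assumptions, not values taken from the cited study; they merely show why the generator-to-EV path can land near the direct-combustion path.

```python
# Back-of-envelope well-to-wheel comparison. All efficiencies here are
# illustrative assumptions, not figures from the cited 2020 study.

# Path 1: burn diesel directly in an internal combustion engine.
ice_drivetrain = 0.28            # assumed tank-to-wheel efficiency of an ICE

# Path 2: diesel generator -> charger/battery -> electric motor.
generator = 0.38                 # assumed genset fuel-to-electricity efficiency
charging = 0.90                  # assumed charger + battery round-trip efficiency
ev_drivetrain = 0.88             # assumed battery-to-wheel efficiency of an EV

ice_well_to_wheel = ice_drivetrain
ev_well_to_wheel = generator * charging * ev_drivetrain

print(f"ICE: {ice_well_to_wheel:.2f}, generator + EV: {ev_well_to_wheel:.2f}")
```

With these assumed numbers the two chains end up within a few percentage points of each other, which is the study’s qualitative point: the EV drivetrain’s high efficiency roughly offsets the generator’s conversion losses.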

19.03.2026 14:45:05


One morning in May 2019, a cardiac surgeon stepped into the operating room at Boston Children’s Hospital more prepared than ever before to perform a high-risk procedure to rebuild a child’s heart. The surgeon was experienced, but he had an additional advantage: He had already performed the procedure on this child dozens of times—virtually. He knew exactly what to do before the first cut was made. Even more important, he knew which strategies would provide the best possible outcome for the child whose life was in his hands.

How was this possible? Over the prior weeks, the hospital’s surgical and cardio-engineering teams had come together to build a fully functioning model of the child’s heart and surrounding vascular system from MRI and CT scans. They began by carefully converting the medical imaging into a 3D model, then used physics to bring the 3D heart to life, creating a dynamic digital replica of the patient’s physiology. The mock-up reproduced this particular heart’s unique behavior, including details of blood flow, pressure differentials, and muscle-tissue stresses.

This type of model, known as a virtual twin, can do more than identify medical problems—it can provide detailed diagnostic insights. In Boston, the team used the model to predict how the child’s heart would respond to any cut or stitch, allowing the surgeon to test many strategies to find the best one for this patient’s exact anatomy.

That day, the stakes were high. With the patient’s unique condition—a heart defect in which large holes between the atria and ventricles were causing blood to flow between all four chambers—there was no manual or textbook to fully guide the doctors. The condition strains the lungs, so the doctors planned an open-heart surgery to reroute deoxygenated blood from the lower body directly to the lungs, bypassing the heart. Typically with this kind of surgery, decisions would be made on the fly, under demanding conditions, and with high uncertainty.
But in this case, the plan had been tested in advance, and the entire team had rehearsed it before the first incision. The surgery was a complete success.

Such procedures have become routine at the Boston hospital. Since that first patient, nearly 2,000 procedures have been guided by virtual-twin modeling. This is the power of the technology behind the Living Heart Project, which I launched in 2014, five years before that first procedure. The project started as an exploratory initiative to see if modeling the human heart was possible. Now with more than 150 member organizations across 28 countries, the project includes dozens of multidisciplinary teams that regularly use multiscale virtual twins of the heart and other vital organs.

This technology is reshaping how we understand and treat the human body. To reach this transformative moment, we had to solve a fundamental challenge: building a digital heart accurate enough—and trustworthy enough—to guide real clinical decisions.

A father’s concern

Now entering its second decade, the Living Heart Project was born in part from a personal conviction. For many years, I had watched helplessly as my daughter Jesse faced endless diagnostic uncertainty due to a rare congenital heart condition in which the position of the ventricles is reversed, threatening her life as she grew. As an engineer, I understood that the heart was an array of pumping chambers, controlled by an electrical signal, with its blood flow carefully regulated by valves. Yet I struggled to grasp the unique structure and behavior of my daughter’s heart well enough to contribute meaningfully to her care. Her specialists knew the bleak forecast children like her faced if left untreated, but because every heart with her condition is anatomically unique, they had little more than their best guesses to guide their decisions about what to do and when to do it.
With each specialist, a new guess.

Then my engineering curiosity sparked a question that has guided my career ever since: Why can’t we simulate the human body the way we simulate a car or a plane?

At a visualization center in Boston, VR imagery helps the mother of a young girl with a complex heart defect understand the inner workings of her child’s heart. Dassault Systèmes

I had spent my career developing powerful computational tools to help engineers build digital models of complex mechanical systems, using models that ranged from the interactions of individual atoms to the components of entire vehicles. What most of these models had in common was the use of physics to predict behavior and optimize performance. But in medicine today, those same physics-based approaches rarely inform decision-making. In most clinical settings, treatment decisions still hinge on judgments drawn from static 2D images, statistical guidelines, and retrospective studies.

This was not always the case. Historically, physics was central to medicine. The word “physician” itself traces back to the Latin physica, which translates to “natural science.” Early doctors were, in a sense, applied physicists. They understood the heart as a pump, the lungs as bellows, and the body as a dynamic system. To be a physician meant you were a master of physics as it applied to the human body.

As medicine matured, biology and chemistry grew to dominate the field, and the knowledge of physics got left behind. But for patients like my daughter, that child in Boston, and millions like them, outcomes are governed by mechanics. No pill or ointment—no chemistry-based solution—would help, only physics.
While I did not realize it at the time, virtual twins can reunite modern physicians with their roots, using engineering principles, simulation science, and artificial intelligence.

A decade of progress

The Living Heart Project (LHP) concept was simple: Could we combine what hundreds of experts across many specialties knew about the human heart to build a digital twin accurate enough to be trusted, flexible enough to personalize, and predictive enough to guide clinical care?

We invited researchers, clinicians, device and drug companies, and government regulators to share their data, tools, and knowledge toward a common goal that would lift the entire field of medicine. The project launched with a dozen or so institutions on board. Within a year, we had created the first fully functional virtual twin of the human heart.

The Living Heart was not an anatomical rendering, tuned to simply replicate what we observed. It was a first-principles model, coupling the network of fibers in the heart’s electrical system, the biological battery that keeps us alive, with the heart’s mechanical response, the muscle contractions that we know as the heartbeat.

The Living Heart virtual twin simulates how the heart beats, offering different views to help scientists and doctors better predict how it will respond to disease or treatment. The center view shows the fine engineering mesh, the detailed framework that allows computers to model the heart’s motion. The image on the right uses colors to show the electrical wave that drives the heartbeat as it conducts through the muscle, and the image on the left shows how much strain is on the tissue as it stretches and squeezes. Dassault Systèmes

Academic researchers had long explored computational models of the heart, but those projects were typically limited by the technology they had access to.
Our version was built on industrial-grade simulation software from Dassault Systèmes, a company best known for modeling tools used in aerospace and automotive engineering, where I was working to develop the engineering simulation division. This platform gave teams the tools to personalize an individual heart model using the patient’s MRI and CT data, blood-pressure readings, and echocardiogram measurements, directly linking scans to simulations.

Surgeons then began using the Living Heart to model procedures. Device makers used it to design and test implants. Pharmaceutical companies used it to evaluate drug effects such as toxicity. Hundreds of publications have emerged from the project, and because they all share the same foundation, the findings can be reproduced, reused, and built upon. With each application, the research community’s understanding of the heart snowballed.

Early on, we also addressed an essential requirement for these innovations to make it to patients: regulatory acceptance. Within the project’s first year, the U.S. Food and Drug Administration agreed to join the project as an observer. Over the next several years, methods for using virtual-heart models as scientific evidence began to take shape within regulatory research programs. In 2019, we formalized a second five-year collaboration with the FDA’s Center for Devices and Radiological Health with a specific goal. That goal was to use the heart model to create a virtual patient population and re-create a pivotal trial of a previously approved device for repairing the heart’s mitral valve. This helped our team learn how to create such a population, and let the FDA experiment with evaluating virtual evidence as a replacement for evidence from flesh-and-blood patients.
In August 2024, we published the results, creating the first FDA-led guidelines for in silico clinical trials and establishing a new paradigm for streamlining and reducing risk in the entire clinical-trial process. In 10 years, we went from a concept that many people doubted could be achieved to regulatory reality.

But building the heart was only the beginning. Following the template set by the heart team, we’ve expanded the project to develop virtual twins of other organs, including the lungs, liver, brain, eyes, and gut. Each corresponds to a different medical domain, which has its own community, data types, and clinical use cases. Working independently, these teams are progressing toward a breakthrough in our understanding of the human body: a multiscale, modular twin platform where each organ twin could plug into a unified virtual human.

How a digital twin of the heart is constructed

A cardiac digital twin starts with medical imaging, typically MRI, CT, or both. The slices are reconstructed into the 3D geometry of the heart and connected vessels. The geometry of the whole organ must then be segmented into its constituent parts, so that each substructure—atria, ventricles, valves, and so on—can be assigned its unique properties.

At this point, the object is converted to a functional, computational model that can represent how the various cardiac tissues deform under load—the mechanics. The complete digital twin model becomes “living” when we integrate the electrical fiber network that drives mechanical contractions in the muscle tissue.

Each part of the heart, such as the left ventricle [left], is superimposed with a detailed digital mesh to re-create its physiology. These pieces come together to form an anatomically accurate rendering of the whole organ [right]. Dassault Systèmes

To simulate circulation, the twin adds computational models of hemodynamics, the physics of blood flow and pressure.
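A common building block for this kind of hemodynamic modeling is a lumped-parameter (Windkessel) description of the arterial system. The sketch below is a minimal two-element version with illustrative parameter values; it is not the Living Heart model, which resolves full 3D flow, but it shows the basic physics of compliance and resistance shaping arterial pressure.

```python
# Minimal two-element Windkessel model of the arterial system:
#   C * dP/dt = Q(t) - P / R
# where P is arterial pressure, Q(t) the inflow from the heart,
# C arterial compliance, and R peripheral resistance.
# Parameter values are illustrative, not taken from the Living Heart model.
import math

R = 1.0      # peripheral resistance, mmHg*s/mL (assumed)
C = 1.2      # arterial compliance, mL/mmHg (assumed)
DT = 0.001   # integration step, s

def inflow(t, period=0.8):
    """Toy cardiac inflow: half-sine ejection in systole, zero in diastole."""
    phase = t % period
    systole = 0.3 * period
    if phase < systole:
        return 400.0 * math.sin(math.pi * phase / systole)  # mL/s
    return 0.0

def simulate(beats=10, period=0.8):
    """Forward-Euler integration of the Windkessel ODE over several beats."""
    p = 80.0     # initial pressure, mmHg
    trace = []
    for i in range(round(beats * period / DT)):
        t = i * DT
        p += DT * (inflow(t, period) - p / R) / C
        trace.append(p)
    return trace

pressures = simulate()
print(f"pressure range: {min(pressures):.0f}-{max(pressures):.0f} mmHg")
```

After a few beats the pressure settles into a physiologically shaped pulse: it rises during each ejection and decays exponentially through diastole with time constant R·C.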
The model is constrained by boundary conditions of blood flow, valve behavior, and vascular resistance set to closely match human physiology. This lets the model predict blood flow patterns, pressure differentials, and tissue stresses.

Finally, the model is personalized and calibrated using available patient data, such as how much the volume of the heart chambers changes during the cardiac cycle, pressure measurements, and the timing of electrical pulses. This means the twin reflects not only the patient’s anatomy but how their specific heart functions.

Building bigger cohorts with generative AI

When the FDA in silico clinical trial initiative launched in 2019, the project’s focus shifted from these handcrafted virtual twins of specific patients to cohorts large enough to stand in for entire trial populations. That scale is feasible today only because virtual twins have converged with generative AI. Modeling thousands of patients’ responses to a treatment or projecting years of disease progression is prohibitively slow with conventional digital-twin simulations. Generative AI removes that bottleneck.

AI boosts the capability of virtual twins in two complementary ways. First, machine learning algorithms are unrivaled at integrating the patchwork of imaging, sensor, and clinical records needed to build a high-fidelity twin. The algorithms rapidly search thousands of model permutations, benchmark each against patient data, and converge on the most accurate representation. Workflows that once required months of manual tuning can now be completed in days, making it realistic to spin up population-scale cohorts or to personalize a single twin on the fly in the clinic.

Second, enriching AI models’ training sets with data from validated virtual patients grounds the AI simulations in physics. By contrast, many conventional AI predictions for patient trajectories rely on statistical modeling trained on retrospective datasets.
Such models can drift beyond physiological reality, but virtual twins anchor predictions in the laws of hemodynamics, electrophysiology, and tissue mechanics. This added rigor is indispensable for both research and clinical care—especially in areas where real-world data are scarce, whether because a disease is rare or because certain patient populations, such as children, are underrepresented in existing datasets.

Enabling in silico clinical trials

On the research side, the FDA-sponsored In Silico Clinical Trial Project that we completed in 2024 opened a new world for medical innovations. A conventional clinical trial may take a decade, and 90 percent of new drug treatments fail in the process. Virtual twins, combined with AI methods, allow researchers to design and test treatments quickly in a simulated human environment. With a small library of virtual twins, AI models can rapidly create expansive virtual patient cohorts to cover any subset of the general population. As clinical data becomes available, it can be added into the training set to increase reliability and enable better predictions.

The Living Heart Project has expanded beyond the heart, modeling organs throughout the body. The 3D brain reconstruction [top] shows major pathways in the brain’s white matter connecting color-coded regions of the brain. The lung virtual twin [middle] combines the organ’s geometry with a physics-based simulation of air flowing down the trachea and into the bronchi. And the cross section of a patient’s foot [bottom] shows points of strain in the soft tissue when bearing weight. Dassault Systèmes

Virtual twin cohorts can represent a realistic population by building individual “virtual patients” that vary by age, gender, race, weight, disease state, comorbidities, and lifestyle factors. These twins can be used as a rich training set for the AI model, which can expand the cohort from dozens to hundreds of thousands.
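The cohort-expansion idea can be sketched as sampling virtual patients over attributes like those listed above. The attribute names, distributions, and the eligibility rule below are hypothetical illustrations, not the project’s actual generative-AI pipeline, which produces physics-validated twins rather than plain attribute records.

```python
# Hypothetical sketch of expanding a small set of virtual twins into a
# large virtual cohort by sampling over patient attributes. Attribute
# names, distributions, and the eligibility rule are invented for
# illustration; the real pipeline generates physics-validated twins.
import random

random.seed(42)  # reproducible sampling

def sample_virtual_patient():
    """Draw one virtual patient from simple attribute distributions."""
    return {
        "age": random.randint(18, 90),
        "sex": random.choice(["F", "M"]),
        "weight_kg": round(random.gauss(78, 15), 1),
        "ejection_fraction": round(random.uniform(0.25, 0.70), 2),
        "diabetic": random.random() < 0.12,
    }

# Expand to a population-scale cohort.
cohort = [sample_virtual_patient() for _ in range(100_000)]

# Apply a (hypothetical) trial inclusion criterion: reduced ejection
# fraction in middle-aged and older patients.
eligible = [p for p in cohort
            if p["ejection_fraction"] < 0.40 and p["age"] >= 40]
print(f"{len(eligible)} of {len(cohort)} virtual patients meet the criteria")
```

Filtering a sampled cohort like this is the computational analogue of trial enrollment screening: it isolates the subpopulation the trial design targets while the remainder stays available for risk analysis.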
Next the virtual cohort can be filtered to identify patients likely to respond to a treatment, increasing the chances of a successful trial for the target population. The trial design can also include a sampling of patient types less likely to respond or with elevated risk factors, thus allowing regulators and clinicians to understand the risks to the broader population without jeopardizing overall trial success. This methodology enhances precision and efficiency in clinical research, providing population-level insights previously available only after many years of real-world evidence.

Of course, though today’s heart digital twins are powerful, they’re not perfect replicas. Their accuracy is bounded by three main factors: what we can measure (for example, image resolution or the uncertainty of how tissue behaves in real life), what we must assume about the physiology, and what we can validate against real outcomes. Many inputs, like scarring, microvascular function, or drug effects, are difficult to capture clinically, so models often rely on population data or indirect estimation. That means predictions can be highly reliable for certain questions but remain less certain for others. Additionally, today’s digital twins lack validation for predicting long-term outcomes years in the future, because the technology has been in use for only a few years.

Over time, each of these limitations will steadily shrink. Richer, more standardized data will tighten personalization of the models. AI tools will help automate labor-intensive steps. And the collection of longitudinal data will improve the model’s ability to reliably predict how the body will evolve over time.

How virtual twins will change health care

Throughout modern medicine, new technologies have sharpened our ability to diagnose, providing ever-clearer images, lab data, and analytics that tell physicians what is presently happening inside a patient’s body.
Virtual twins shift that paradigm, giving clinicians a predictive tool.

This “Living Lung” virtual-twin simulation shows strain patterns during breathing. Credit: Mona Eskandari/UC Riverside

Early demonstrations are already appearing in many areas of medicine, including cardiology, orthopedics, and oncology. Soon, doctors will also be able to collaborate across specialties, using a patient-specific virtual twin as the common ground for discussing potential interactions or side effects they couldn’t predict independently.

Although these applications will take some time to become the standard in clinical care, more changes are on the horizon. Real-time data from wearables, for example, could continuously update a patient’s personalized virtual twin. This approach could empower patients to understand and engage more deeply in their care, as they could see the direct effects of medical and lifestyle changes. In parallel, their doctors could get comprehensive data feeds, using virtual twins to monitor progress.

Imagine a digital companion that shows how your particular heart will react to different amounts of salt intake, stress, or sleep deprivation. Or a visual explanation of how your upcoming surgery will affect your circulation or breathing.
Virtual twins could demystify the body for patients, fostering trust and encouraging proactive health decisions.

How are virtual twins being used in medicine?

- Virtual twins have guided cardiovascular surgeries, providing predictions and exposing hidden details that even expert clinicians might miss, such as subtle tissue responses and flow dynamics.
- Oncologists are modeling tumor growth and the body’s response to different therapies, reducing the uncertainty in choosing the best treatment path for both medical and quality-of-life metrics.
- Orthopedic specialists are personalizing implants to deliver custom-made solutions, considering not only the local environment but also the overall body kinematics that will govern long-term outcomes.

A new era of healing

With the Living Heart Project, we’re bringing physics back to physicians. Modern physicians won’t need to be physicists, any more than they need to be chemists to use pharmacology. However, to benefit from the new technology, they will need to adapt their approach to care.

This means no longer seeing the body as a collection of discrete organs and considering only symptoms, but instead viewing it as a dynamic system that can be understood, and in most cases, guided toward health. It means no longer guessing what might work but knowing—because the simulation has already shown the result. By better integrating engineering principles into medicine, we can redefine it as a field of precision, rooted in the unchanging laws of nature. The modern physician will be a true physicist of the body and an engineer of health.

19.03.2026 12:00:05

Technologie a věda

Happy 80th anniversary, ENIAC! The Electronic Numerical Integrator and Computer, the first large-scale, general-purpose, programmable electronic digital computer, helped shape our world.

On 15 February 1946, ENIAC—developed in the Moore School of Electrical Engineering at the University of Pennsylvania, in Philadelphia—was publicly demonstrated for the first time. Although primitive by today’s standards, ENIAC’s purely electronic design and programmability were breakthroughs in computing at the time. ENIAC made high-speed, general-purpose computing practicable and laid the foundation for today’s machines.

On the eve of its unveiling, the U.S. Department of War issued a news release hailing it as a new machine “expected to revolutionize the mathematics of engineering and change many of our industrial design methods.” Without a doubt, electronic computers have transformed engineering and mathematics, as well as practically every other domain, including politics and spirituality.

ENIAC’s success ushered in the modern computing industry and laid the foundation for today’s digital economy. During the past eight decades, computing has grown from a niche scientific endeavor into an engine of economic growth, the backbone of billion-dollar enterprises, and a catalyst for global innovation. Computing has led to a chain of innovations and developments such as stored programs, semiconductor electronics, integrated circuits, networking, software, the Internet, and distributed large-scale systems.

Inside the ENIAC

The motivation for developing ENIAC was the need for faster computation during World War II. The U.S. military wanted to produce extensive artillery firing tables for field gunners to quickly determine settings for a specific weapon, a target, and conditions.
Calculating the tables by hand took “human computers” several days, and the available mechanical machines were far too slow to meet the demand.

80 Years of Electronic Computer Milestones

- 1946: ENIAC operational. Birth of electronic computing
- 1951: UNIVAC I. Start of commercial computing
- 1958: Integrated circuit. Foundation for modern computer hardware
- 1964: IBM System/360. Popular mainframe computer
- 1970: Programmed Data Processor (PDP-11). Popular 16-bit minicomputer
- 1971: Intel 4004. Beginning of the microprocessor and microcomputer era
- 1975: Cray-1. First supercomputer
- 1977: VAX. Popular 32-bit minicomputer
- 1981: IBM PC. Personal and small-business computing
- 1989: World Wide Web. Digital communication, interaction, and transaction (e-commerce)
- 2002: Amazon Web Services. Beginning of the cloud computing revolution
- 2010: Apple iPad. Handheld computer/tablet
- 2010: Industry 4.0. Delivered real-time decision-making, smart manufacturing, and logistics
- 2016: First reprogrammable quantum computer demonstrated. Ignited interest in quantum computing
- 2023: Generative AI boom. Widespread use of GenAI by individuals, businesses, and academia
- 2026: ENIAC’s 80th anniversary. 80 years of computing evolution

In 1942 John Mauchly, an associate professor of electrical engineering at Penn’s Moore School, suggested using vacuum tubes to speed up computer calculations. Following up on his theory, the U.S. Army Ballistic Research Laboratory, which was responsible for providing artillery settings to soldiers in the field, commissioned Mauchly and his colleagues J. Presper Eckert and Adele Katz Goldstine to work on a new high-speed computer. Eckert was a lab instructor at Moore, and Goldstine became one of ENIAC’s programmers. It took them a year to design ENIAC and 18 months to build it.

The computer contained about 18,000 vacuum tubes, which were cooled by 80 air blowers. More than 30 meters long, it filled a 9 m by 15 m room and weighed about 30 tonnes. It consumed as much electricity as a small town.

Programming the machine was difficult.
ENIAC did not have stored programs, so to reprogram the machine, operators manually reconfigured cables with switches and plugboards, a process that took several days.

By the 1950s, large universities either had acquired or built their own machines to rival ENIAC. The schools included Cambridge (EDSAC), MIT (Whirlwind), and Princeton (IAS). Researchers used the computers to model physical phenomena, solve mathematical problems, and perform simulations.

After almost nine years of operation, ENIAC was officially decommissioned on 2 October 1955.

ENIAC in Action: Making and Remaking the Modern Computer, a book by Thomas Haigh, Mark Priestley, and Crispin Rope, describes the design, construction, and testing processes and dives into the machine’s afterlife. The book also outlines the complex relationship between ENIAC and its designers, as well as the revolutionary approaches to computer architecture.

In the early 1970s, there was a controversy over who invented the electronic computer and who would be assigned the patent. In 1973 Judge Earl Richard Larson of U.S. District Court in Minnesota ruled in the Honeywell v. Sperry Rand case that Eckert and Mauchly did not invent the automatic electronic digital computer but instead had derived their subject matter from a computer prototyped in 1939 by John Vincent Atanasoff and Clifford Berry at Iowa State College (now Iowa State University).
The ruling granted Atanasoff legal recognition as the inventor of the first electronic digital computer.

IEEE’s ENIAC Milestone

In 1987 IEEE designated ENIAC as an IEEE Milestone, citing it as “a major advance in the history of computing” and saying the machine “established the practicality of large-scale electronic digital computers and strongly influenced the development of the modern, stored-program, general-purpose computer.”

The commemorative Milestone plaque is displayed at the Moore School, by the entrance to the classroom where ENIAC was built.

“The ENIAC legacy heralded the computer age, transforming not only science and industry but also education, research, and human communication and interaction.”

A paper on the machine, published in 1996 in IEEE Annals of the History of Computing and available in the IEEE Xplore Digital Library, is a valuable source of technical information.

“The Second Life of ENIAC,” an article published in the annals in 2006, covers a lesser-known chapter in the machine’s history, about how it evolved from a static system—configured and reconfigured through laborious cable plugging—into a precursor of today’s stored-program computers.

A classic history paper on ENIAC was published in the December 1995 IEEE Technology and Society Magazine.

The IEEE Inspiring Technology: 34 Breakthroughs book, published in 2023, features an ENIAC chapter.

The women behind ENIAC

One of the most remarkable aspects of the ENIAC story is the pivotal role women played, according to the book Proving Ground: The Untold Story of the Six Women Who Programmed the World’s First Modern Computer, highlighted in an article in The Institute. There were no “programmers” at that time; only schematics existed for the computer.
Six women, known as the ENIAC 6, became the machine’s first programmers. The ENIAC 6 were Kathleen Antonelli, Jean Bartik, Betty Holberton, Marlyn Meltzer, Frances Spence, and Ruth Teitelbaum.

“These six women found out what it took to run this computer, and they really did incredible things,” a Penn professor, Mitch Marcus, said in a 2006 PhillyVoice article. Marcus teaches in Penn’s computer and information science department.

In 1997 all six female programmers were inducted into the Women in Technology International Hall of Fame, in Los Angeles.

Two other women contributed to the programming. Goldstine wrote ENIAC’s five-volume manual, and Klára Dán von Neumann, wife of John von Neumann, helped train the programmers and debug and verify their code.

To honor the women of ENIAC, the IEEE Computer Society established the annual Computer Pioneer Award in 1981. Eckert and Mauchly were among the award’s first recipients. In 2008 Bartik was honored with the award. Nominations are open to all professionals, regardless of gender.

An ENIAC replica

Last year a group of 80 autistic students, ages 12 to 16, from PS Academy Arizona, in Gilbert, recreated the ENIAC using 22,000 custom parts. It took the students almost six months to assemble.

A ceremony was held in January to display their creation. The full-scale replica features actual-size panels made from layered cardboard and wood. All electronic components are simulated rather than electrically active.
The machine, illuminated by hundreds of LEDs, is accompanied by a soundtrack that simulates the deep hum of ENIAC’s transformers and the rhythmic clicking of relays.

“Every major unit, accumulators, function tables, initiator, and master programmer is present and placed exactly where it was on the original machine,” Tom Burick, the teacher who mentored the project, said at the ceremony.

The replica, still on display at the school, is expected to be moved to a more permanent spot in the near future.

ENIAC’s legacy

ENIAC’s significance is both technical and symbolic. Technically, it marks the beginning of the chain of innovations that created today’s computational infrastructure. Symbolically, it made governments, militaries, universities, and industry view computation as a tool for improvement and for innovative applications that had previously been impossible. It marked a tectonic shift in the way humans approach problem-solving, modeling, and scientific reasoning.

The ENIAC legacy heralded the computer age, transforming not only science and industry but also education, research, and human communication and interaction.

As Eckert is reported to have said, “There are two epochs in computer history: Before ENIAC and After ENIAC.”

Coevolution of programming languages

The remarkable evolution of computer hardware during the past 80 years has been sparked by advances in programming languages—the essential drivers of computing. From the manual rewiring of ENIAC to the orchestration of intelligent, distributed systems, programming languages have steadily evolved to make computers more powerful, expressive, and accessible.

Lessons From Computing’s Remarkable Journey

Computing history teaches us that flexibility, accessibility, collaboration, sound governance, and forward thinking are essential for sustained technological progress.
In a recent Communications of the ACM article, Richa Gupta identified four historic shifts that led to computing’s rapid, transformative progress:

- Programmable machines taught us that flexibility is key; technologies that adapt and are repurposed scale better.
- The Internet showed that connection and standard protocols drive explosive growth but also bring new risks such as data security issues, invasion of privacy, and misuse.
- Personal computers illustrated that accessibility and usability matter more than raw power. When nonexperts can use a tool easily, adoption rises.
- The open-source movement revealed that collaborative innovation accelerates growth and helps spot problems early.

Predictions for computing in the decades ahead

The evolution of computing will continue along multiple trajectories, with the emphasis moving from generalization to specialization (for AI, graphics, security, and networking), from monolithic system design to modular integration, and from performance-centric metrics alone to energy efficiency and sustainability as primary objectives.

Increasingly, security will be built into hardware by design. Computing paradigms will expand beyond traditional deterministic models to embrace probabilistic, approximate, and hybrid approaches for certain tasks.

Those developments will usher in a new era of computing and a new class of applications.

18.03.2026 18:00:05

Technologie a věda

Every time you unlock your smartphone or start your connected car, you are generating a trail of digital evidence that can be used to track your every move.

In Your Data Will Be Used Against You: Policing in the Age of Self-Surveillance, just published by NYU Press, law professor Andrew Guthrie Ferguson exposes how the Internet of Things has quietly transformed into a vast surveillance network, turning our most personal devices into digital informants. The following excerpt explores the concept of “sensorveillance,” detailing the specific mechanisms—such as Google’s Sensorvault, geofence warrants, and vehicle telemetry—that allow law enforcement to repurpose consumer technology into powerful tools for investigation and control.

A man walked into a bank in Midlothian, Va., his black bucket hat pulled low over dark sunglasses. He handed a note to the teller, brandished a gun, and walked away with US $195,000. Police had no leads—but they knew that the robber had been holding a smartphone when he entered the bank. Guessing that the smartphone, like most smartphones, had some Google-enabled service running, police ordered Google to turn over information about all the phones near the bank during the holdup. In response to a series of warrants, Google produced information about 19 phones that had been active near the bank at the time of the robbery. Further investigation directed the police to Okello Chatrie, who was ultimately charged with the crime.

Cathy Bernstein had a tough time explaining why her own car reported an accident to police. Bernstein had been driving a Ford equipped with 911 Assist, which was automatically enabled when she struck another vehicle. Rather than stick around to trade insurance information, she sped away. But her smart car had registered the bump—and called the police dispatcher, leading to a fairly awkward conversation:

Computer-Generated Voice: Attention, a crash has occurred. Line open.
911 Operator: Hello. Can anyone hear me?
Unidentified Woman: Yes, yes.
911 Operator: Okay. This is 911. You’ve been involved in an accident.
Unidentified Woman: No.
911 Operator: Well, your car called in to us because it said you’d been involved in an accident. Are you sure everything’s okay?
Unidentified Woman: Everything’s okay.
911 Operator: Okay. Are you broke down?
Unidentified Woman: No, I’m fine. The guy that hit me—he did not turn.
911 Operator: Okay, so you have been involved in an accident.
Unidentified Woman: No, I haven’t.
911 Operator: Did you hit a car?
Unidentified Woman: No, I didn’t.
911 Operator: Did you leave the scene of an accident?
Unidentified Woman: No. I would never do anything like that.

Apparently, Bernstein did do something “like that.” She was soon caught and cited for leaving the scene of the accident. Her own car provided evidence of her guilt.

The Rise of “Sensorveillance”

Once upon a time, our things were just things. A bike was a tool for biking. It got you from one location to another, but it didn’t “know” more about your travels than any other inanimate object did. It was dumb in a comforting way, and we used it as intended. Today, a top-of-the-line bike can track your route and calculate your average speed along the way. Hop on an e-bike from a commercial bike share, and it will collect data for your trip, plus the trips of everyone else who used it that month.

These “smart” objects belong to what technologist Kevin Ashton named the Internet of Things. Ashton proposed adding radio-frequency identification (RFID) tags and sensors to everyday objects, allowing them to collect data that could be fed into networked systems without human intervention. A sensor in a river could monitor the cleanliness of the water. A tag on a bottle of shampoo could trace its journey throughout the supply chain.
Add enough sensors to enough objects and you can model the health of an entire ecosystem—or learn whether you’re sending too much of your inventory to Massachusetts and too little to Texas.

Ashton first theorized the Internet of Things (IoT) in the late 1990s. Today, the IoT goes well beyond his initial vision, including not only RFID tags but also sensors with Wi-Fi, Bluetooth, cellular, and GPS connections. These small, low-cost sensors record data about movement, heat, pressure, or location and can engage in two-way communication.

Of course, such a system is also, by necessity, a system of surveillance. “Sensorveillance”—a term I created to highlight the intersection of sensors and surveillance—is slowly becoming the default across the developed world.

Cellphone Surveillance Networks

Let’s start with phones. You’re probably not surprised that your cellphone company tracks your location; that’s how cellphones work. Both smartphones and “dumb” mobile phones use local cell towers, owned by cellphone companies, to connect you to your friends and family, which means those companies know which towers you are near at all times.

If you always carry your phone with you, your phone’s whereabouts—recorded as cell-site location information (CSLI)—reveal yours. One man, Timothy Carpenter, found this out the hard way after he and a group of associates set out to rob a series of electronics stores. Carpenter was the alleged ringleader, but he didn’t enter the stores himself. He served as the lookout, waiting in the car while his associates stuffed merchandise into bags.

It might have been hard for investigators to tie him to the crimes—if not for the fact that every minute he kept watch, his cellphone was pinging a local tower, logging his location. Using that information, the FBI was able to determine that he had been near each store during the exact moment of each robbery.

Cell signals are the tip of the proverbial data iceberg.
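The kind of inference investigators drew from Carpenter’s CSLI can be sketched in a few lines of Python. Everything below (the tower IDs, timestamps, and the 30-minute matching window) is invented for illustration and is not drawn from the actual case record.

```python
from datetime import datetime, timedelta

# Hypothetical carrier records: (timestamp, tower the phone pinged).
pings = [
    (datetime(2011, 4, 10, 14, 2), "tower_17"),
    (datetime(2011, 4, 10, 14, 25), "tower_17"),
    (datetime(2011, 4, 16, 11, 5), "tower_42"),
]

# Hypothetical incidents: (store, time of robbery, tower nearest the store).
robberies = [
    ("Store A", datetime(2011, 4, 10, 14, 15), "tower_17"),
    ("Store B", datetime(2011, 4, 16, 11, 0), "tower_42"),
]

def placed_near(pings, when, tower, window=timedelta(minutes=30)):
    """True if any ping hit the given tower within `window` of `when`."""
    return any(t == tower and abs(ts - when) <= window for ts, t in pings)

for store, when, tower in robberies:
    print(store, placed_near(pings, when, tower))
# prints: "Store A True" then "Store B True"
```

Real CSLI analysis is coarser (a tower only bounds the phone to a sector, not a point), but this is the essence of the matching step: cross-referencing a suspect’s ping log against the times and places of the crimes.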
If you have a smartphone, you’re almost certainly using something created by Google. Google makes money off advertising. The more Google knows about users, the better it can target ads to them. Google’s location services are on all Android phones, which use the company’s operating system, but they’re also on Google apps, including Google Maps and Gmail.

For years, all that location information ended up in what the company called the Sensorvault. The Sensorvault, as the name suggests, combined data from GPS, Bluetooth, cell towers, IP addresses, and Wi-Fi signals to create a powerful tracking system that could identify a phone’s location with great precision. As you might imagine, police saw it as a digital evidence miracle. In 2020, Google received more than 11,500 warrants from law enforcement seeking information from the Sensorvault.

In 2024, Google announced that it would no longer retain all of this data in the cloud. Instead, the geolocation information would be stored on individual devices, requiring police to get a warrant for a specific device. The demise of the Sensorvault came about through a change in corporate policy, which could be reversed. But at least for now, Google has made it significantly harder for police to access its data.

And while the Sensorvault was the biggest source of geolocational evidence, it is far from the only one. Even apps that have nothing to do with maps or navigation might nonetheless be collecting your location data. In one Pennsylvania case, prosecutors learned that a burglar used an iPhone flashlight app to search through a home, and they used the data from the app to prove he was in the home at the time of the break-in. These apps might be advertised as “free,” but they come with a hidden cost.

Cars, increasingly, collect almost as much information as phones.
Mobile extraction devices can collect digital forensics about a car’s speed, when its airbags deployed, when its brakes were engaged, and where it was when all that happened. If you connect your phone to play Spotify or to read out your texts, then your call logs, contact lists, social media accounts, and entertainment selections can be downloaded directly from your vehicle. Because cars are involved in so many crimes (either as the instrument of the crime or as transportation), searches of this data are becoming more commonplace.

Even without physically extracting information from the car, police have other ways to get the data. After all, the car’s built-in telemetry system is sharing information with third parties. In addition to the usual personal information you give up when buying a car (name, address, phone number, email, Social Security number, driver’s license number), when you own a Stellantis-brand car, the company collects how often you use the car, your speed, and instances of acceleration or braking. Nissan asserts the right to collect information about “sexual activity, health diagnosis data, and genetic [data]” in addition to “preferences, characteristics, psychological trends, predispositions, behavior, attitudes, intelligence, abilities, and aptitudes.” Nissan’s privacy policy specifically reserves the right to provide this information to both data brokers and law enforcement.

The Law of Smart Things

The fact that government agents can glean so much information from our things does not mean that they should be able to do so at any time or for any reason. The U.S. Fourth Amendment—drafted in an era without electricity—protects “persons, houses, papers, and effects” against unreasonable search and seizure, but is naturally silent on the question of location data.

The first question is whether the data from our smart things should be constitutionally protected from police.
In the language of the constitutional text, the smart device itself is an “effect”—a movable piece of personal property. But what about the data collected by the effect? Is the location data collected by your smartwatch considered part of the watch, or part of the person wearing the watch? Neither? Both?

To its credit, the U.S. Supreme Court has addressed some of the hard questions around digital tracking. In two cases, the first involving GPS tracking of a car and the second involving the CSLI tracking of Timothy Carpenter’s cellphone, the court has placed limits on the government’s ability to collect location data over the long term.

United States v. Jones involved GPS tracking of a car. Antoine Jones owned a nightclub in Washington, D.C. He also sold cocaine and found himself under criminal investigation for a large-scale drug distribution scheme. To prove Jones’s connection to “the stash house,” police placed a GPS device on his wife’s Jeep Cherokee. This was before GPS came standard in cars, so the device was physically attached to the undercarriage of the vehicle.

Data about Jones’s travels was recorded for 28 days, during which he visited the stash house multiple times. The prosecutors introduced the GPS data at trial, and Jones was found guilty. Jones appealed his conviction, arguing that the warrantless use of a GPS device to track his car violated his Fourth Amendment rights.

In 2012, the Supreme Court held that a warrant was required, based on the reasoning that the physical placement of the GPS device on the Jeep was itself a Fourth Amendment search requiring a warrant.
Justice Sonia Sotomayor agreed regarding the physical search but went further, discussing the harms of long-term GPS tracking: “GPS monitoring generates a precise, comprehensive record of a person’s public movements that reflects a wealth of detail about her familial, political, professional, religious, and sexual associations.”

Timothy Carpenter’s ill-fated robbery spree gave the Supreme Court another chance to address the constitutional harms of long-term tracking. In their attempts to connect Carpenter to the six electronics stores that had been robbed, federal investigators requested 127 days of location data from two mobile phone carriers. The problem for the police, however, was that they had obtained the information on Carpenter without a judicial warrant.

Carpenter challenged the FBI’s acquisition of his CSLI, claiming that it violated his reasonable expectation of privacy. In a 5–4 opinion, the Supreme Court determined that the acquisition of long-term CSLI was a Fourth Amendment search, which required a warrant. As the Court stated in its 2018 ruling: “A cell phone faithfully follows its owner beyond public thoroughfares and into private residences, doctor’s offices, political headquarters, and other potentially revealing locales.... [W]hen the Government tracks the location of a cell phone it achieves near perfect surveillance.”

Jones and Carpenter are helpful for setting the boundaries of location-based searches. But, in truth, the cases generate a lot more questions than answers. What about surveillance that is not long-term? At what point does the aggregation of details about a person’s location violate their reasonable expectation of privacy?

The Warrant According to Google

Okello Chatrie’s case, in which police used Google’s location data to identify him as the mystery bank robber, offers a stark warning about the limits of Fourth Amendment protections under these circumstances.
It’s also a terrific example of why “geofence” warrants, which request information within a certain geographic boundary, are appealing to police. From surveillance footage, detectives could see that the suspect had a phone to his ear when he walked into the bank. A geofence could identify who the suspect was, and likely where he came from and where he went. Google held the answer in its virtual vault. A warrant gave investigators the key.

The police cast a broad net. The geofence warrant asked for data on all the cellphones within a 150-meter radius, an area, as the court described it, “about three and a half times the footprint of a New York city block.” After receiving the police’s initial request for information on all the phones in the area, Google returned 19 anonymized numbers. Over the course of a three-step warrant process, the company narrowed those 19 phones down to three and then to one, which it revealed as belonging to Okello Chatrie.

The three-step warrant process is a unique innovation in the digital evidence space. Google’s lawyers developed a procedure whereby detectives seeking targeted geolocation data had to file three separate requests, first requesting identifying numbers in an area, then narrowing the request based on other information, and finally obtaining an order to unmask the anonymous number (or numbers) by providing a name.

To be clear, Google—a private company—required the government to jump through these hoops because Google considered it important to protect its customers’ data. It was the company’s lawyers—not the courts or the government—who demanded these warrants.

Buying Data

Warrants provide at least some procedural barrier to data collection by police. If government agencies want to avoid that minor hassle, they can simply buy the data instead.
By contracting with data-location services, several federal agencies have already done so.

The logic for this Fourth Amendment loophole is straightforward: You gave your data to a third-party company, and the company can use it as it wishes. If you own a car that is smart enough to collect driving analytics, you clicked some agreement saying the car company could use the data—study it, analyze it, and, if it wants, sell it. If you don’t want to give them data in the first place, that is okay (although it will likely result in less optimal functionality), but you cannot rightly complain when they use the data you gave them in ways that benefit them. If the police wish to buy the data, just like an insurer or marketing firm might, how can you object? It’s not your data.

Who Is to Blame?

Fears about the amount of personal information that could be revealed with long-term GPS surveillance have become reality. Today, police don’t need to plant a device to track your movements—they can rely on your car or phone to do it for them.

This happened because companies sold convenience and consumers bought it. So it might be tempting to blame ourselves. We’re the ones buying this technology. If we don’t want to be tracked, we can always go back to using paper maps and writing down directions by hand. If few of us are willing to make that trade, that’s on us.

But it’s not that easy. You may still be able to choose a dumb bike over a smart one, but a car that tracks you will soon be the only type of car you can buy. And while cars and data can, in theory, be separated, that’s not true for all our smart things. Without cell-signal tracking capabilities, a cellphone is just a paperweight. And in today’s world, living without a phone or a car is simply not practical for many people.

There are technological steps we can take toward protecting privacy. Companies can localize the data the sensors generate within the devices themselves, rather than in a central location like the Sensorvault.
Similarly, the information that allows you to unlock your Apple iPhone via facial recognition stays localized on the phone. These are technological fixes, and positive ones. But even localized data is available to police with a warrant.

This is the puzzle of the digital age. We can’t—or don’t want to—avoid creating data, but that data, once created, becomes available for legal ends. The power to track every person is the perfect tool for authoritarianism. For every wondrous story about catching a criminal, there will be a terrifying story of tracking a political enemy or suppressing dissent. Such immense power can and will be abused.

17.03.2026 13:00:05

Technology and Science
9 days

As electronics demand higher energy density, one component has proved challenging to shrink: the capacitor. Making a smaller capacitor usually requires thinning the dielectric layer or shrinking the electrode surface area, which often comes at the cost of performance. A new polymer material could help change that.

In a study published 18 February in Nature, a Pennsylvania State University–led team reported a capacitor crafted from a polymer blend that can operate at temperatures up to 250 °C while storing roughly four times as much energy as conventional polymer capacitors. Today’s advanced polymer capacitors typically function only up to about 100 °C, meaning engineers often rely on bulky cooling systems in high-power electronics. The research team has filed a patent for the polymer capacitors and plans to bring them to market.

Capacitors deliver rapid bursts of energy and stabilize voltage in circuits, making them essential in applications ranging from electric vehicles and aerospace electronics to power-grid infrastructure and AI data centers. Yet while transistors have steadily shrunk with advances in semiconductor manufacturing, passive components such as capacitors and inductors have not scaled at the same pace.

“Capacitors can account for 30 to 40 percent of the volume in some power electronics systems,” says Qiming Zhang, an electrical engineering researcher at Penn State and study author, explaining why it’s important to make smaller capacitors.

A Plastics Blend More Powerful Than Its Parts

The research team combined two commercially available engineered plastics: polyetherimide (PEI), originally developed by General Electric and widely used in industrial equipment, and PBPDA, known for strong heat resistance and electrical insulation. When processed together under controlled conditions, the polymers self-assemble into nanoscale structures that form thin dielectric films inside capacitors.
Those structures help suppress electrical leakage while allowing the material to polarize strongly in an electric field, enabling greater energy storage.

The resulting material exhibits an unusually high dielectric constant—a measure of how much electrical energy a material can store. Most polymer dielectrics have values around four, but the blended polymer dielectric in the new work had a value of 13.5.

“If you look at the literature up to now, no one has reached this level of dielectric constant in this type of polymer system,” Zhang says. “Putting two commonly used polymers together and seeing this kind of performance was a surprise to many people.”

Because the material can remain operational even at elevated temperatures—such as those from extreme environmental heat or hot spots in densely built components—capacitors built from this polymer could potentially store the same amount of energy in a smaller package. “With this material, you can make the same device using about [one-fourth as much] material,” Zhang says. “Because the polymers themselves are inexpensive, the cost does not increase. At the same time, the component can become smaller and lighter.”

How the Polymer Mix Improves Capacitors

The researchers’ finding is “a big advancement,” says Alamgir Karim, a polymer research director at the University of Houston who was not involved in the Penn State work. “Normally when you mix polymers, you don’t expect the dielectric constant to increase.”

Karim says the effect likely arises from nanoscale interfaces created when the polymers partially separate. “At about a 50–50 mixture, the polymers don’t fully mix and instead create a very large interfacial area,” he says. “Those interfaces may be where the unusual electrical behavior comes from.”

If the material can be produced at scale, it could help address a key bottleneck in high-power electronics.
Higher-temperature capacitors could reduce cooling requirements and allow engineers to pack more power into smaller systems—an advantage for aerospace platforms, electric vehicles, the electric grid, and other high-temperature environments.

But translating the concept from laboratory methods to commercial manufacturing may present challenges, says Zongliang Xie, a postdoctoral researcher at the Lawrence Berkeley National Laboratory, in California. The Penn State team is now producing small dielectric films, but industrial capacitor manufacturing typically requires continuous rolls of material that can extend for kilometers.

“Industry generally prefers extrusion-based processing because it’s easier and cheaper to control,” Xie says. “Scaling to produce great lengths of film while maintaining the same structure and performance could complicate matters. There’s potential, but it’s also challenging.”

Still, researchers say the discovery demonstrates that new performance limits may still be unlocked using familiar materials. “Developing the material is only the first step,” Zhang says. “But it shows people that this barrier can be broken.”
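As a rough sanity check on the reported numbers (a back-of-envelope sketch, not the paper’s own analysis), the energy density stored in a linear dielectric charged to field E is

    u = ½ ε₀ εᵣ E²

At a fixed operating field, energy density scales linearly with the dielectric constant, so moving from εᵣ ≈ 4 to εᵣ ≈ 13.5 is a factor of 13.5 / 4 ≈ 3.4, broadly consistent with the roughly fourfold gain reported and with Zhang’s estimate that the same device could use about one-fourth as much material. Real capacitor energy density also depends on breakdown field and dielectric loss, which this simple linear formula ignores.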

17.03.2026 12:00:08

Technology and Science
9 days

Looming over the internet lasers and fire-starting phones that companies were touting at Mobile World Congress in Barcelona this month was a more nebulous but much larger announcement: a pan-European cloud called EURO-3C.

EURO-3C’s backers – Spanish telecoms giant Telefónica, dozens of other European companies, and the European Commission (EC) – aim to fill a gap. U.S.-based cloud giants dominate in the EU, and European policymakers want their growing portfolio of digital government services on a “sovereign cloud” under full EU control.

But the EU lacks a real equivalent to the likes of AWS or Microsoft Azure. Indeed, any effort to build one will inevitably run up against the same U.S. cloud giants. Just four U.S.-based hyperscalers – AWS, Microsoft Azure, Google Cloud, and IBM Cloud – together account for some 70 percent of EU cloud services. This is despite the fact that the 2018 U.S. CLOUD Act allows U.S. federal law enforcement – at least in theory – to compel U.S.-based firms to hand over data that’s stored abroad.

Who do you trust?

But those hypothetical risks to digital services have become more real as transatlantic relations have soured under the second Trump administration. The U.S. has openly threatened to invade an EU member state and sanctioned a European Commissioner for passing legislation the White House dislikes. After the White House sanctioned the Netherlands-based International Criminal Court in February 2025, Court staffers claimed Microsoft locked the Court’s chief prosecutor out of his email (Microsoft has denied this). Around the same time, the U.S. reportedly threatened to sever EU ally Ukraine’s access to crucial Starlink satellite internet as leverage during trade negotiations.

“The geopolitical risk isn’t just the most extreme form of a doomsday ‘kill switch’ where Washington turns off Europe’s internet,” says Stéfane Fermigier of EuroStack, an industry group that supports European digital independence.
“It is the selective degradation of services and a total lack of retaliatory leverage.”

What, then, is the EU to do? France offers an example. Even before 2025, France implemented harsh restrictions on non-EU cloud providers in public services – providers must locate data in the EU, rely on EU-based staff, and may not have majority non-EU shareholders. Now, EU policymakers are following France’s lead.

In October 2025, the EC issued a two-part framework for judging cloud providers bidding for public-sector contracts. In the first part, the framework lays out a sort of sovereignty ladder: the more a provider is subject to EU law, the higher its sovereignty level on the ladder. Any prospective bidder must first meet a certain level, depending on the tender. Qualifying bidders then move to the second part, where their “sovereignty” is scored in more detail. Using too much proprietary software, over-relying on supply chains from outside the EU, having non-EU support staff, and exposure to non-EU laws like the CLOUD Act all hurt a bidder’s score.

The framework was created for one tender, but observers say it sets a major precedent. Cloud providers bidding for state contracts across Europe may need to follow it, and it may influence legislation at both national and EU-wide levels.

A question of scale

Who, then, will receive high marks? At the moment, the answer is not simple. The EU cloud scene is quite fragmented. Numerous modest EU providers offer “sovereign cloud” services – such as Scaleway, OVHcloud, and Deutsche Telekom’s T-Systems – but none are on the scale of AWS or Google Cloud.

Inertia is on the side of the U.S. cloud giants, who can invest in their infrastructure and services on a far grander scale than their European counterparts. Some U.S.
providers now offer cloud services they say comply with the Commission’s “cloud sovereignty” demands. Some European observers, like EuroStack, say such promises are hollow so long as a provider’s parent company is subject to the likes of the CLOUD Act, and loopholes in the Commission’s process remain open. An AWS spokesperson told Spectrum it had not disclosed any non-U.S. enterprise or government data to the U.S. government under the CLOUD Act; a Google spokesperson said that its most sensitive EU offerings “are subject to local laws, not US law.”

Even if a project like EURO-3C can offer a large-scale alternative, the U.S. cloud giants have another sort of inertia. Many developers – and many public purchasers of their services – will need convincing to leave behind a familiar environment.

“If you look at AWS, you look at Google, they’ve created some super technology. It’s very convenient, it’s easy to use,” says Arnold Juffer, CEO of the Netherlands-based cloud provider Nebul. “Once you’re in that platform, in that ecosystem, it’s very hard to get out.”

Martyna Chmura, an analyst at the Bloomsbury Intelligence and Security Institute, a London-based think tank, sees some EU developers taking a mixed approach. “Many organizations are already moving toward multi-cloud setups, using European or sovereign providers for sensitive workloads while still relying on hyperscalers for certain services,” she says.

In that case, the EU’s top-down demands may encourage developers to use EU providers for sensitive applications – like government services, transport, autonomous vehicles, and some industrial automation – even if it’s inconvenient in the short term, or if it causes even more fragmentation of the EU cloud scene. “Running systems across different platforms can increase integration costs and make security and data governance more complicated.
In some cases, organisations could lose some of the efficiency and cost advantages that come from using large hyperscale platforms,” Chmura says.

“Overall, the EU appears willing to accept some of these trade-offs,” Chmura says.

17.03.2026 11:00:06

Technology and Science
10 days

In the fictional nation of Beryllia, the 2026 World Chalice Games were set to begin as the country faced an unrelenting heat wave. The grid, already under strain from the circumstances, was dealt a further blow when a coordinated set of vandalism, drone, and ballistic attacks by an adversary, Crimsonia, crippled the grid’s physical infrastructure.

This scenario, inspired by the upcoming 2026 World Cup and the 2028 Olympic Games in Los Angeles, was an exercise in studying how utilities can prevent and mitigate, among other dangers, physical attacks on power grids. Called GridEx, the exercise was hosted by the Electricity Information Sharing and Analysis Center (E-ISAC) from 18 to 20 November 2025, and was described in a report released on 2 March. GridEx has been held every two years since 2011.

“We know that threat actors look to exploit certain circumstances,” says Michael Ball, CEO of E-ISAC, which is a program of the North American Electric Reliability Corporation (NERC), about designing the Beryllia scenario. “The Chalice Games became a good example of how we could build a scenario around a threat actor.”

Physical attacks on the grid are rising in the U.S., and GridEx attendance was up in November as utilities grapple with how to prevent and mitigate attacks. Participation in the exercise was at its highest level since 2019, according to the new report. Given the number of organizations present, GridEx estimates that more than 28,000 individual players participated, including utility workers and government partners, an all-time high since the exercise began.

Rising Physical Threats to Power Grids

The U.S. and Canadian grids face growing security issues from physical threats, including vandalism, assault of utility workers, intrusion onto property, and theft of components like copper wiring. NERC’s 2025 E-ISAC end-of-year report cites more than 3,500 physical security breaches that calendar year, about 3 percent of which disrupted electricity.
That’s up from 2,800 events cited in the 2023 report (3 percent of those also resulted in electricity disruptions). And despite the attention drawn by recent high-profile attacks in the U.S., physical attacks on the grid are happening worldwide.

“They’re not uniquely a U.S. thing,” says Danielle Russo, executive director of the Center for Grid Security at Securing America’s Future Energy, a nonpartisan organization focused on advancing national energy security. Russo says that while attacks are common in places like Ukraine, they’re not limited to wartime scenarios. “Other countries that are not experiencing direct conflict are experiencing increasing amounts of physical attacks on their energy infrastructure,” she says.

Take Germany, for example: On 3 January, an arson attack by left-wing activists in Berlin caused a five-day blackout impacting 45,000 households. That comes after a suspected arson attack on two pylons in September 2025 left 50,000 Berlin households without power. Some German officials cite domestic extremism and fears of Russian sabotage in recent years as reasons for heightened security concerns over critical infrastructure.

[Image: Henrik Beuster, spokesman for grid operator Stromnetz Berlin, stands in front of the Lichterfelde power plant on 7 January after a suspected attack disrupted power supply in the area. Britta Pedersen/picture alliance/Getty Images]

The uptick in attacks on the U.S. grid has been anchored by a number of incidents in recent years. In December 2025, an engineer in San Jose, California, was sentenced to 10 years in prison for bombing electric transformers in 2022 and 2023. A Tennessee man was arrested in November 2024 for attempting to attack a Nashville substation using a drone armed with explosives. And in 2023, a neo-Nazi leader was among two arrested in a plot to attack five substations around Baltimore with firearms, part of an increasing trend of white supremacist groups planning to attack the U.S.
energy sector.

“Since [E-ISAC] started publishing data back in 2016, we’ve seen a large and consistent increase in the number of reported physical security incidents per year,” says Michael Coe, the vice president of physical and cyber security programs at the American Public Power Association, a trade group that works with E-ISAC to plan GridEx. While not all data is publicly available, Coe says there’s been a “tenfold” increase over the past decade in the number of reported physical attacks on the grid.

Drone Attacks: A Grid Security Challenge

During the fictional World Chalice Games scenario, drone attacks destroyed Beryllia’s substation equipment, highlighting a threat that’s gained traction as more drones enter the airspace.

“The question we get all the time is, how do you tell if it’s a bad actor, or if it’s a 12-year-old kid that got the drone for their birthday?” says Erika Willis, the program manager for the substations team at the Electric Power Research Institute (EPRI).

One strategy to track and alert utilities to potential threats such as drones is called sensor fusion. The system includes a pan-tilt-zoom camera capable of 360-degree motion mounted on top of a tripod or pole, along with four installed radars. The radars combine with the camera for a dual system that can track drones even if they’re obstructed from view, says Willis. For instance, if a nearby drone flies behind a tree, hidden from the camera, the radars will still pick up on it. The technology is currently being tested at EPRI’s labs in Charlotte, North Carolina, and Lenox, Massachusetts.

EPRI is also exploring how robotics and AI can improve security systems, Willis says. One approach involves integrating AI analysis into robotic technology already surveilling substation perimeters. Using AI can improve detection of break-ins and damage to fencing around substations, Willis says.
“As opposed to a human having to go through 200 images of a fence, you can have the AI overlays do some of those algorithms…If the robot has done the inspection of the substation 100 times, it can then relay to you that there’s an anomaly,” Willis says.

[Image: Prisma Photonics deploys fiber sensing technology that uses reflected optical signals to detect perturbations from vehicles and other sources near underground fiber cable. Prisma Photonics]

Already, a number of utilities in the U.S. are using AI integrations in their security and monitoring processes. That’s thanks in part to the Tel Aviv, Israel–based Prisma Photonics, a software company that launched in 2017 and has since deployed its fiber sensing technology across thousands of miles of transmission infrastructure in the U.S., Canada, Europe, and Israel. A file-cabinet-sized unit plugs into a substation and sends light pulses down existing fiber optic cables 30 miles in each direction. As the pulses travel down the cables, a tiny fraction of the light is reflected back to the substation unit. An AI model processes the results and can classify events based on patterns in the optical signal caused by perturbations happening around the fiber cable.

“If we identify an event that we don’t have a classification for, and we get feedback from a customer saying, ‘oh, this was a car crash,’ then we can classify that in the model to say this is actually what happened,” says Tiffany Menhorn, Prisma Photonics’ vice president of North America.

As preparations get underway for the ninth GridEx in 2027, Ball says participation in the exercises alone isn’t enough to bolster grid security. Instead, he wants utilities to take what they learn from the training and apply it in their own operations. “It’s the action of doing it, versus our statistic of saying, ‘here’s what our growth was.’ That growth should relate to the readiness and capability of the industry.”
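The distance readout in a system like this follows from simple pulse time-of-flight: light travels at c/n inside the fiber, and the backscatter makes a round trip. A minimal sketch of that calculation (illustrative values and function name, not Prisma Photonics’ actual software):

```python
# Locating a disturbance along a fiber from backscatter timing,
# the time-of-flight principle behind distributed fiber sensing.
C_VACUUM = 299_792_458.0  # speed of light in vacuum, m/s
N_FIBER = 1.468           # assumed group index of silica fiber

def event_distance_m(round_trip_s: float) -> float:
    """One-way distance from the interrogator to a reflection event.

    The probe pulse travels out and the backscatter travels back,
    so the event sits at half the round-trip path length, with the
    light moving at c / n inside the fiber.
    """
    return (C_VACUUM / N_FIBER) * round_trip_s / 2.0

# Backscatter arriving 300 microseconds after the pulse leaves
# corresponds to an event roughly 30 km down the fiber.
print(f"{event_distance_m(300e-6) / 1000:.1f} km")
```

Binning arrival times this way effectively turns one fiber into thousands of virtual sensors along its length; the classification step the article describes then runs on the signal pattern observed at each position.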

16.03.2026 20:42:45

Technology and Science
10 days

America’s Talent Strategy: Building the Workforce for the Golden Age, a report published last year by the U.S. Departments of Commerce, Education, and Labor, identified a significant engineering and skills gap. The 27-page report concluded that the shortage of talent in essential areas—including advanced manufacturing, artificial intelligence, cloud computing, and cybersecurity—poses significant risks to U.S. economic and technological leadership.

To help attract talent in those fields, the Labor Department last month introduced incentives for apprenticeships, including a US $145 million “pay for performance” grant program. The funding aims to develop registered apprenticeships in high-demand fields including artificial intelligence and information technology.

Reacting to the urgent national need for targeted workforce development were members of IEEE Young Professionals, led by Alok Tibrewala, an IEEE senior member. He is a cochair of the IEEE North Jersey Section’s Young Professionals group.

“As a software engineer, this impending shortage concerns me because I believe that the U.S. AI and cybersecurity skills gap would show up first in the early-career pipeline,” Tibrewala says. “Students will be entering the U.S. workforce without enough hands-on experience building secure AI-enabled enterprise and cloud systems, and this gap will persist without practical, mentor-led training before graduation.”

Tibrewala led a strategic planning session with representatives from the New Jersey Institute of Technology, IEEE Member and Geographic Activities, and IEEE Young Professionals to discuss holding an event that would provide practical, industry-relevant training by experts and IEEE leaders.

“I was able to establish a partnership with NJIT, recruit speakers, design the event’s agenda, and promote the event to ensure it was aligned with the strategy outlined in the workforce report,” he says. “This effort aligns with broader U.S.
workforce development priorities focused on industry-driven skills training in critical technology areas.”

The IEEE Buildathon event was held on 1 November at NJIT’s Newark campus. More than 30 students and early-career engineers heard from 11 speakers. Through interactive workshops, live demonstrations, and networking opportunities, they left with practical, employer-aligned skills and clearer career pathways for AI-era skills-building. Tibrewala chaired the event and also serves as chair of the IEEE Buildathon program.

Session takeaways

Region 1 Director Bala S. Prasanna, a life senior member, gave the keynote address. He emphasized the need for universities, industry practitioners, and IEEE volunteer leaders to collaborate on programs to enhance technical skills.

IEEE Member Kalyani Matey, cochair of the IEEE North Jersey Section’s Young Professionals, conducted a workshop on how to build one’s personal brand and a responsive network. Participants received valuable insights about résumé building, effective communication strategies, and enhancing their visibility and employability.

“Over time, this kind of structured, employer-aligned training will help increase confidence, employability, and technical readiness across the country. With sustained support, programs like the IEEE Buildathon can become a practical bridge from education to industry in the AI era.” —Alok Tibrewala

Tibrewala led the Unlocking AI’s Potential: Solving Big Challenges With Smart Data and IEEE DataPort session. The web-based DataPort platform allows researchers to store, share, access, and manage their research datasets in a single, trusted location. He discussed needed skills including AI literacy, strong data handling and dataset stewardship, and turning data into actionable insights.

Chaitali Ladikkar, a senior software engineer, delivered the insightful Brains Behind the Game seminar.
Ladikkar, an IEEE member, highlighted the transformative impact AI is having on gaming and game-engine technologies. She explained how AI is reshaping game development, and she covered how machine learning is being used for animation, faster content generation, and testing of new titles. Her seminar received enthusiastic feedback from participants.

The Building Better Business Relationships DiSC workshop provided insights into enhancing professional relationships and communication within an engineering workforce. DiSC is a behavioral self-assessment used to understand an individual’s communication style and to adapt to others.

Participant experience and testimonials

The event received high praise from participants for its practical and industry-relevant content, according to Tibrewala.

“This training significantly enhanced my understanding and readiness for industry roles, filling gaps my regular academic coursework did not fully address,” said Humna Sultan, an IEEE student member who is a senior studying computer science at Stevens Institute of Technology, in Hoboken, N.J.

“The Buildathon was structured around real engineering challenge scenarios that deepened my understanding of AI and cloud technologies,” said Carlos Figueredo, an IEEE graduate student member who is studying data science at the University of Michigan, in Ann Arbor.
“It boosted my confidence and practical skills essential for the industry.”

Bavani Karthikeyan Janaki said “it was incredible to see how technology and sustainability came together to drive real-world impact, thanks to the dedicated efforts of the organizers including Tibrewala, Matey, and the IEEE North Jersey Young Professionals.” Janaki is pursuing a master’s degree in computer and information science at Long Island University, in New York.

Funding and collaborative efforts

The Buildathon was made possible through grants from the IEEE Young Professionals group and funding from the IEEE North Jersey Section and IEEE Member and Geographic Activities. Their support shows how IEEE’s professional organizations can collaborate to address workforce needs by supporting the delivery of technical sessions that strengthen early-career pipelines.

Future plans and a call to action

Building on the event’s success, Tibrewala and Matey plan to make the IEEE Buildathon an ongoing initiative. They are exploring ways to expand it to additional university campuses and IEEE communities. Tibrewala says they plan to refine the format based on participant feedback and lessons learned. To support consistent quality, he and Matey say, they are working on a playbook for organizers that will include a repeatable agenda, a workshop template, speaker guidelines, and post-event feedback forms.

The approach depends on continued coordination among host universities, local IEEE sections, and Young Professionals volunteers, Tibrewala says.

“Enabling other groups to run similar events,” he says, “can help more students and early-career engineers gain practical exposure to AI, data, cloud, cybersecurity, and other key emerging technologies in a structured setting.

“Efforts like this help translate national workforce priorities into real training that students and early-career engineers can apply immediately to their projects.
This also helps close the gap between classroom learning and the realities of building secure, reliable systems in production environments. Over time, this kind of structured, employer-aligned training will help increase confidence, employability, and technical readiness across the country.

“With sustained support, programs like the IEEE Buildathon can become a practical bridge from education to industry in the AI era.”

16.03.2026 20:00:03

Technology and Science
13 days

Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2026: 1–5 June 2026, VIENNA

Enjoy today’s videos!

All legged robots deployed “in the wild” to date were given a body plan that was predefined by human designers and could not be redefined in situ. The manual and permanent nature of this process has resulted in very few species of agile terrestrial robots beyond familiar four-limbed forms. Here, we introduce highly athletic modular building blocks and show how they enable the automatic design and rapid assembly of novel agile robots that can “hit the ground running” in unstructured outdoor environments.

[ Northwestern University Center for Robotics and Biosystems ] [ Paper ] via [ Gizmodo ]

If you were going to develop the ideal urban delivery robot more or less from scratch, it would be this.

[ RIVR ]

Don’t get me wrong, there are some clever things going on here, but I’m still having a lot of trouble seeing where the unique, sustainable value is for a humanoid robot performing these sorts of tasks.

[ Figure ]

One of those things that you don’t really think about as a human, but is actually pretty important.

[ Paper ] via [ ETH Zurich ]

We propose TRIP-Bag (Teleoperation, Recording, Intelligence in a Portable Bag), a portable, puppeteer-style teleoperation system fully contained within a commercial suitcase, as a practical solution for collecting high-fidelity manipulation data across varied settings.

[ KIMLAB ]

We propose an open-vocabulary semantic exploration system that enables robots to maintain consistent maps and efficiently locate (unseen) objects in semi-static real-world environments using LLM-guided reasoning.

[ TUM ]

That’s it folks, we have no need for real pandas anymore—if we ever did in the first place.
Be honest, what has a panda done for you lately?

[ MagicLab ]

RoboGuard is a general-purpose guardrail for ensuring the safety of LLM-enabled robots. RoboGuard is configured offline with high-level safety rules and a robot description, reasons about how these safety rules are best applied in the robot’s context, then synthesizes a plan that maximally follows user preferences while ensuring safety.

[ RoboGuard ]

In this demonstration, a small team responds to a (simulated) radiation contamination leak at a real nuclear reactor facility. The team deploys their reconfigurable robot to accompany them through the facility. As the station is suddenly plunged into darkness, the robot’s camera is hot-swapped to thermal so that it can continue on. Upon reaching the approximate location of the contamination, the team installs a Compton gamma-ray camera and pan-tilt illuminating device. The robot autonomously steps forward, locates the radiation source, and points it out with the illuminator.

[ Paper ]

On March 6th, 2025, the Robomechanics Lab at CMU was flooded with 4 feet of black water (i.e., mixed with sewage). We lost most of the robots in the lab, and as a tribute my students put together this “In Memoriam” video. It includes some previously unreleased robots and video clips.

[ Carnegie Mellon University Robomechanics Lab ]

There haven’t been a lot of successful education robots, but here’s one of them.

[ Sphero ]

The opening keynote from the 2025 Silicon Valley Humanoids Summit: “Insights Into Disney’s Robotic Character Platform,” by Moritz Baecher, Director, Zurich Lab, Disney Research.

[ Humanoids Summit ]

13.03.2026 16:00:04

Technology and Science
13 days

Raquel Urtasun has spent 16 years in the self-driving space, long enough to navigate every metaphorical glorious hill and plunging valley. She took the trip from the early “pipe dream” dismissals, to the “we’re this close” certainty, and back again.

The industry is now riding a new wave of optimism and investment, including at Waabi Innovation Inc., the autonomous trucking company that Urtasun founded in 2021. The Spanish-Canadian professor at the University of Toronto, and former chief scientist of Uber’s Advanced Technologies Group, has helped make Waabi a key player. Beginning in fall 2023, the Toronto-based startup has been running geofenced cargo routes from Dallas to Houston in a fleet of retrofitted Peterbilt semis, navigating even residential streets in loaded, 36,000-kilogram (80,000-pound) behemoths with a human “safety observer” on board.

In October, the company reached a milestone by integrating its “Waabi Driver” physical-AI system in Volvo’s new VNL Autonomous truck, which the Swedish automaker is building in Virginia. That self-driving solution uses Nvidia’s Drive AGX Thor, an AI-based platform for autonomous and software-defined vehicles. In January, the startup raised $750 million in its latest funding round to accelerate commercial development in autonomous trucking and expand its system into the fiercely competitive robotaxi space. Backers include Khosla Ventures, Nvidia, and Volvo.

Urtasun says the Waabi Driver can scale across a full range of vehicles, geographies, and environments—although snowstorms can still create a no-go zone for now. It’s powered by what Urtasun calls the industry’s most advanced neural simulator. The verifiable, end-to-end AI model will be a “shared brain” that partners can transplant into cars, trucks, and pretty much anything on wheels.
The idea is to grab a chunk of a global autonomous trucking business that McKinsey estimates could be worth more than $600 billion a year by 2035, with autonomous haulers responsible for 15 percent of total U.S. trucking miles as early as 2030.

Backed by an additional $250 million from Uber, Waabi plans to deploy at least 25,000 autonomous taxis through Uber’s ride-hailing service, whose world-dominating reach encompasses 70 countries, about 15,000 cities, and more than 200 million monthly users.

Urtasun spoke with IEEE Spectrum about how Waabi is counting on sensors and simulation to prove real-world safety, and why the move to autonomy is a moral imperative that outweighs the disruption for human drivers—whether they’re driving trucks or family sedans. Our conversation was edited for length and clarity.

The Shift to Next-Gen Autonomous Vehicles

IEEE Spectrum: Until quite recently, autonomous tech seemed to have hit a wall, at least in the public’s mind. Now investors are flooding the zone again, and companies are all-in. What happened?

Raquel Urtasun: There were a lot of empty promises, or [people] not realizing the complexity of the problem. There was a realization that actually, this problem is harder than people anticipated. It’s also because of the type of technology that was developed at the time, what we call “AV 1.0.” These are hand-engineered systems that need to be brute-forced by humans. You need lots of capital and a massive amount of miles on the road just to get to the first deployment.

What you see with the next generation—AV 2.0 and systems that can reason—is that you finally have a solution that scales. When we started the company, this was a very contrarian view. But today, the breakthroughs in AI have made it clear that this is the next big revolution. It’s not just about more compute; it’s about building a brain that can generalize.
That is the “aha moment” the industry is having now.

Even for someone who believes in the tech, seeing a driverless semi-trailer in your rear-view mirror might be unsettling. Now you’ve integrated your tech into the aerodynamic, diesel-powered Volvo VNL Autonomous truck. How do you convince regulators and the public that these trucks belong on the street?

Urtasun: Safety, when you think about carrying 80,000 pounds on this massive rig, is definitely top of mind. We believe the only way to do this safely is with a redundant platform that is fully developed and validated by the OEM, not with a retrofit. The OEM builds a special type of truck that has all the redundant steering, power, and braking, so that no matter what happens, there is always a way we can interface with and activate that truck in a safe manner. Then we are responsible for the sensors, the compute, and obviously the brain that drives those trucks.

AI’s Impact on Trucking Jobs

One of the biggest points of contention is the displacement of human drivers. As AI disrupts a range of workplaces, how do you respond to people who say this will eliminate good-paying, blue-collar jobs?

Urtasun: The way we see this is that everybody who’s a truck driver today, and wants to retire as a truck driver, will be able to do so. This is physical AI; this is not like the digital world where suddenly you can switch immediately to this technology. That adoption and scaling is going to take time. There will also be many jobs created with this technology: remote operations, terminal operations, and other things. You have time to change the form of labor from being on the road, which is for weeks at a time (and it’s a really difficult and dehumanizing job, let’s be honest) to something you can do locally. There was an interesting [U.S.] Department of Transportation study that showed because of this gradual adoption, there will be more jobs created than removed.

You’ve spoken about a personal motivation behind this.
Why do you believe the advantages of autonomy outweigh any growing pains, including the potential for unexpected accidents or even deaths?

Urtasun: There are 2 million deaths on the road globally per year, and nobody’s questioning that. That’s the status quo. If you think the machines have to be perfect to deploy, you are actually sacrificing many humans along the way that you could have saved. Human error is a factor in between 90 and 96 percent of accidents. Those could be preventable accidents. Some accidents will always be unavoidable; a tire could blow for a machine the same as it could for a human. But the important comparison is how much safer we are. This technology is the answer to many, many things.

Most of the industry is focused on “hub-to-hub” highway driving. But you’ve argued that Waabi’s AI can handle the complexity of local streets.

Urtasun: The rest of the industry has gone with this business model where you need hubs next to the highway. This adds a lot of friction and cost. Thanks to our verifiable end-to-end AI system, we can drive on surface [local] streets. We can do unprotected lefts, traffic lights, and tight turns. These core capabilities enable us to drive all the way to the end customer. We are already hauling commercial loads for customers like Samsung through our Uber Freight partnership.

You’ve mentioned that Waabi doesn’t like to talk about “number of miles” driven as a metric. For an engineering audience, that sounds counterintuitive. How does your “simulation-first” approach replace the need for real-world road time?

Urtasun: In the industry, miles have been used as a proxy for advancement. How many miles does Tesla need to drive to see any of these situations? But we are a simulation-first company. Waabi World can simulate all the sensors, the behaviors of humans, everything. It is the only simulator where you can mathematically prove that testing and driving in simulation is the same as driving in the real world.
You can expose the system to billions of simulations in the cloud. This is what allows us to be so capital efficient and fast.

Verifiable AI vs. Black Box Systems

What is the difference between your “interpretable” AI and the “black box” systems we see elsewhere?

Urtasun: We’ve seen an evolution in passenger cars from level-2+ systems to end-to-end, black-box architectures. But those are not verifiable. You cannot validate and verify those systems, which is a massive problem when you think about regulators and OEMs trusting that technology.

What Waabi has built is end-to-end, but fully verifiable. The system is forced to interpret what it is perceiving and use those interpretations for reasoning, so that it can understand the consequences of every action. It is much more akin to how our brain actually works: your “Type 2” thinking, where you start thinking about cause and effect and consequences, and then you typically make a much better choice in your maneuver.

Tesla is famously, and controversially, relying on camera data almost exclusively to run and improve its self-driving systems. You’re not a fan of that approach?

Urtasun: We use multiple sensors: lidar, camera, and radar. That’s very important because the failure modes of those sensors are very different and they’re very complementary. We don’t compromise safety to reduce the bill-of-materials cost today.

Those (passenger car) level-2+ systems are not architected for level 4, where there’s no human on board. People don’t necessarily realize there is a huge difference in terms of the bar when there is no human to rely on. It’s not, “Well, if I don’t have a lot of system interventions, I’m almost there.” That’s not a metric. We are native level 4. We decide which areas the system can drive in, and in what conditions. We are building technology that can drive different form factors (trucks or robotaxis) with the same brain.

Editor’s note: This article was updated on 13 March to correct an error in the original post.
Contrary to what was stated in the original post, the trucks being driven from Dallas to Houston do have a human observer on board.
