Domácí

Informační Technologie

Informační Technologie
1 den

Let’s be honest. There’s a huge gap between writing code that works and writing code that’s actually good. It’s the number one thing that separates a junior developer from a senior, and it’s something a surprising number of us never really learn. If you’re serious about your craft, you’ve probably felt this. You build something, it functions, but deep down you know it’s brittle. You’re afraid to touch it a year from now. Today, we’re going to bridge that gap. I’m going to walk you through eight design principles that are the bedrock of professional, production-level code. This isn’t about fancy algorithms; it’s about a mindset. A way of thinking that prepares your code for the future. And hey, if you want a cheat sheet with all these principles plus the code examples I’m referencing, you can get it for free. Just sign up for my newsletter from the link in the description, and I’ll send it right over. Ready? Let’s dive in. 1. Cohesion & Single Responsibility This sounds academic, but it’s simple: every piece of code should have one job, and one reason to change. High cohesion means you group related things together. A function does one thing. A class has one core responsibility. A module contains related classes. Think about a UserManager class. A junior dev might cram everything in there: validating user input, saving the user to the database, sending a welcome email, and logging the activity. At first glance, it looks fine. But what happens when you want to change your database? Or swap your email service? You have to rip apart this massive, god-like class. It’s a nightmare. The senior approach? Break it up. You’d have: An EmailValidator class. A UserRespository class (just for database stuff). An EmailService class. A UserActivityLogger class. Then, your main UserService class delegates the work to these other, specialized classes. Yes, it’s more files. It looks like overkill for a small project. I get it. But this is systems-level thinking. You’re anticipating future changes and making them easy. You can now swap out the database logic or the email provider without touching the core user service. That’s powerful. 2. Encapsulation & Abstraction This is all about hiding the messy details. You want to expose the behavior of your code, not the raw data. Imagine a simple BankAccount class. The naive way is to just have public attributes like balance and transactions. What could go wrong? Well, another developer (or you, on a Monday morning) could accidentally set the balance to a negative number. Or set the transactions list to a string. Chaos. The solution is to protect your internal state. In Python, we use a leading underscore (e.g., _balance) as a signal: “Hey, this is internal. Please don’t touch it directly.” Instead of letting people mess with the data, you provide methods: deposit(), withdraw(), get_balance(). Inside these methods, you can add protective logic. The deposit() method can check for negative amounts. The withdraw() method can check for sufficient funds. The user of your class doesn’t need to know how it all works inside. They just need to know they can call deposit(), and it will just work. You’ve hidden the complexity and provided a simple, safe interface. 3. Loose Coupling & Modularity Coupling is how tightly connected your code components are. You want them to be as loosely coupled as possible. A change in one part shouldn’t send a ripple effect of breakages across the entire system. Let’s go back to that email example. A tightly coupled OrderProcessor might create an instance of EmailSender directly inside itself. Now, that OrderProcessor is forever tied to that specific EmailSender class. What if you want to send an SMS instead? You have to change the OrderProcessor code. The loosely coupled way is to rely on an “interface,” or what Python calls an Abstract Base Class (ABC). You define a generic Notifier class that says, “Anything that wants to be a notifier must have a send() method.” Then, your OrderProcessor just asks for a Notifier object. It doesn’t care if it’s an EmailNotifier or an SmsNotifier or a CarrierPigeonNotifier. As long as the object you give it has a send() method, it will work. You’ve decoupled the OrderProcessor from the specific implementation of the notification. You can swap them in and out interchangeably. A quick pause. I want to thank boot.dev for sponsoring this discussion. It’s an online platform for backend development that’s way more interactive than just watching videos. You learn Python and Go by building real projects, right in your browser. It’s gamified, so you level up and unlock content, which is surprisingly addictive. The core content is free, and with the code techwithtim, you get 25% off the annual plan. It’s a great way to put these principles into practice. Now, back to it. — 4. Reusability & Extensibility This one’s a question you should always ask yourself: Can I add new functionality without editing existing code? Think of a ReportGenerator function that has a giant if/elif/else block to handle different formats: if format == 'text', elif format == 'csv', elif format == 'html'. To add a JSON format, you have to go in and add another elif. This is not extensible. The better way is, again, to use an abstract class. Create a ReportFormatter interface with a format() method. Then create separate classes: TextFormatter, CsvFormatter, HtmlFormatter, each with their own format() logic. Your ReportGenerator now just takes any ReportFormatter object and calls its format() method. Want to add JSON support? You just create a new JsonFormatter class. You don’t have to touch the ReportGenerator at all. It’s extensible without being modified. 5. Portability This is the one everyone forgets. Will your code work on a different machine? On Linux instead of Windows? Without some weird version of C++ installed? The most common mistake I see is hardcoding file paths. If you write C:\Users\Ahmed\data\input.txt, that code is now guaranteed to fail on every other computer in the world. The solution is to use libraries like Python’s os and pathlib to build paths dynamically. And for things like API keys, database URLs, and other environment-specific settings, use environment variables. Don’t hardcode them! Create a .env file and load them at runtime. This makes your code portable and secure. 6. Defensibility Write your code as if an idiot is going to use it. Because someday, that idiot will be you. This means validating all inputs. Sanitizing data. Setting safe default values. Ask yourself, “What’s the worst that could happen if someone provides bad input?” and then guard against it. In a payment processor, don’t have debug_mode=True as the default. Don’t set the maximum retries to 100. Don’t forget a timeout. These are unsafe defaults. And for the love of all that is holy, validate your inputs! Don’t just assume the amount is a number or that the account_number is valid. Check it. Raise clear errors if it’s wrong. Protect your system from bad data. 7. Maintainability & Testability The most expensive part of software isn’t writing it; it’s maintaining it. And you can’t maintain what you can’t test. Code that is easy to test is, by default, more maintainable. Look at a complex calculate function that parses an expression, performs the math, handles errors, and writes to a log file all at once. How do you even begin to test that? There are a million edge cases. The answer is to break it down. Have a separate OperationParser. Have simple add, subtract, multiply functions. Each of these small, pure components is incredibly easy to test. Your main calculate function then becomes a simple coordinator of these tested components. 8. Simplicity (KISS, DRY, YAGNI) Finally, after all that, the highest goal is simplicity. KISS (Keep It Simple, Stupid): Simple code is harder to write than complex code, but it’s a million times easier to understand and maintain. Swallow your ego and write the simplest thing that works. DRY (Don’t Repeat Yourself): If you’re doing something more than once, wrap it in a reusable function or component. YAGNI (You Aren’t Gonna Need It): This is the counter-balance to all the principles above. Don’t over-engineer. Don’t add a flexible, extensible system if you’re just building a quick prototype to validate an idea. When I was coding my startup, I ignored a lot of these patterns at first because speed was more important. Always ask what the business need is before you start engineering a masterpiece. Phew, that was a lot. But these patterns are what it takes to level up. It’s a shift from just getting things done to building things that last. If you enjoyed this, let me know. I’d love to make more advanced videos like this one. See you in the next one.

10.12.2025 18:36:09

Informační Technologie
1 den
Informační Technologie
1 den

Python inner functions are those you define inside other functions to access nonlocal names and bundle logic with its surrounding state. In this tutorial, you’ll learn how to create inner helper functions, build closures that retain state across calls, and implement decorators that modify the behavior or existing callables without changing the original implementation. By the end of this tutorial, you’ll understand that: Inner functions access nonlocal names from the enclosing scope, so you pass data in once and reuse it across calls. You can replace an inner helper function with a non-public function to enable code reuse. You can create a closure by returning the inner function without calling it, which preserves the captured environment. You can modify the captured state by declaring nonlocal variables that point to mutable objects. You craft decorators with nested functions that wrap a callable and extend its behavior transparently. You will now move through focused examples that feature encapsulated helpers, stateful closures, and decorator patterns, allowing you to apply each technique with confidence in real Python projects. Get Your Code: Click here to download the free sample code to practice inner functions in Python. Take the Quiz: Test your knowledge with our interactive “Python Inner Functions: What Are They Good For?” quiz. You’ll receive a score upon completion to help you track your learning progress: Interactive Quiz Python Inner Functions: What Are They Good For? Test inner functions, closures, nonlocal, and decorators in Python. Build confidence and learn to keep state across calls. Try the quiz now. Creating Functions Within Functions in Python A function defined inside another function is known as an inner function or a nested function. Yes, in Python, you can define a function within another function. This type of function can access names defined in the enclosing scope. Here’s an example of how to create an inner function in Python: Python >>> def outer_func(): ... def inner_func(): ... print("Hello, World!") ... inner_func() ... >>> outer_func() Hello, World! In this example, you define inner_func() inside outer_func() to print the Hello, World! message to the screen. To do that, you call inner_func() on the last line of outer_func(). This is the quickest way to write and use an inner function in Python. Inner functions provide several interesting possibilities beyond what you see in the example above. The core feature of inner functions is their ability to access variables and objects from their enclosing function even after that function has returned. The enclosing function provides a namespace that is accessible to the inner function: Python >>> def outer_func(who): ... def inner_func(): ... print(f"Hello, {who}") ... inner_func() ... >>> outer_func("World!") Hello, World! Note how you can pass a string as an argument to outer_func(), and inner_func() can access that argument through the name who. This name is defined in the local scope of outer_func(). The names defined in the local scope of an outer function are nonlocal names from the inner function’s point of view. Here’s an example of a more realistic inner function: Python >>> def factorial(number): ... if not isinstance(number, int): ... raise TypeError("number must be an integer") ... if number < 0: ... raise ValueError("number must be zero or positive") ... ... def inner_factorial(number): ... if number <= 1: ... return 1 ... return number * inner_factorial(number - 1) ... return inner_factorial(number) ... >>> factorial(4) 24 In factorial(), you first validate the input data to ensure that the user provides an integer that is equal to or greater than zero. Then, you define a recursive inner function called inner_factorial(). This function performs the factorial calculation and returns the result. The final step is to call inner_factorial(). Note: For a more detailed discussion on recursion and recursive functions, check out Thinking Recursively in Python and Recursion in Python: An Introduction. An advantage of using the pattern in the example above is that you perform all the argument validation in the outer function, so you can skip error checking in the inner function and focus on the computation at hand. Using Inner Functions in Python The use cases of Python inner functions are varied. You can use them to provide encapsulation, hiding your functions from external access. You can also write quick helper inner functions. Finally, you can use inner functions to create closures and decorators. In this section, you’ll learn about the former two use cases of inner functions, and in later sections, you’ll learn how to create closures and decorators. Providing Encapsulation A common use case of inner functions arises when you need to protect or hide a given function from everything happening outside of it, so that the function is completely hidden from the global scope. This type of behavior is known as encapsulation. Here’s an example that showcases the concept: Read the full article at https://realpython.com/inner-functions-what-are-they-good-for/ » [ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

10.12.2025 14:00:00

Informační Technologie
1 den

There's an old compiler-building tutorial that has become part of the field's lore: the Let's Build a Compiler series by Jack Crenshaw (published between 1988 and 1995). I ran into it in 2003 and was very impressed, but it's now 2025 and this tutorial is still being mentioned quite often in Hacker News threads. Why is that? Why does a tutorial from 35 years ago, built in Pascal and emitting Motorola 68000 assembly - technologies that are virtually unknown for the new generation of programmers - hold sway over compiler enthusiasts? I've decided to find out. The tutorial is easily available and readable online, but just re-reading it seemed insufficient. So I've decided on meticulously translating the compilers built in it to Python and emit a more modern target - WebAssembly. It was an enjoyable process and I want to share the outcome and some insights gained along the way. The result is this code repository. Of particular interest is the TUTORIAL.md file, which describes how each part in the original tutorial is mapped to my code. So if you want to read the original tutorial but play with code you can actually easily try on your own, feel free to follow my path. A sample To get a taste of the input language being compiled and the output my compiler generates, here's a sample program in the KISS language designed by Jack Crenshaw: var X=0 { sum from 0 to n-1 inclusive, and add to result } procedure addseq(n, ref result) var i, sum { 0 initialized } while i < n sum = sum + i i = i + 1 end result = result + sum end program testprog begin addseq(11, X) end . It's from part 13 of the tutorial, so it showcases procedures along with control constructs like the while loop, and passing parameters both by value and by reference. Here's the WASM text generated by my compiler for part 13: (module (memory 8) ;; Linear stack pointer. Used to pass parameters by ref. ;; Grows downwards (towards lower addresses). (global $__sp (mut i32) (i32.const 65536)) (global $X (mut i32) (i32.const 0)) (func $ADDSEQ (param $N i32) (param $RESULT i32) (local $I i32) (local $SUM i32) loop $loop1 block $breakloop1 local.get $I local.get $N i32.lt_s i32.eqz br_if $breakloop1 local.get $SUM local.get $I i32.add local.set $SUM local.get $I i32.const 1 i32.add local.set $I br $loop1 end end local.get $RESULT local.get $RESULT i32.load local.get $SUM i32.add i32.store ) (func $main (export "main") (result i32) i32.const 11 global.get $__sp ;; make space on stack i32.const 4 i32.sub global.set $__sp global.get $__sp global.get $X i32.store global.get $__sp ;; push address as parameter call $ADDSEQ ;; restore parameter X by ref global.get $__sp i32.load offset=0 global.set $X ;; clean up stack for ref parameters global.get $__sp i32.const 4 i32.add global.set $__sp global.get $X ) ) You'll notice that there is some trickiness in the emitted code w.r.t. handling the by-reference parameter (my previous post deals with this issue in more detail). In general, though, the emitted code is inefficient - there is close to 0 optimization applied. Also, if you're very diligent you'll notice something odd about the global variable X - it seems to be implicitly returned by the generated main function. This is just a testing facility that makes my compiler easy to test. All the compilers are extensively tested - usually by running the generated WASM code [1] and verifying expected results. Insights - what makes this tutorial so special? While reading the original tutorial again, I had on opportunity to reminisce on what makes it so effective. Other than the very fluent and conversational writing style of Jack Crenshaw, I think it's a combination of two key factors: The tutorial builds a recursive-descent parser step by step, rather than giving a long preface on automata and table-based parser generators. When I first encountered it (in 2003), it was taken for granted that if you want to write a parser then lex + yacc are the way to go [2]. Following the development of a simple and clean hand-written parser was a revelation that wholly changed my approach to the subject; subsequently, hand-written recursive-descent parsers have been my go-to approach for almost 20 years now. Rather than getting stuck in front-end minutiae, the tutorial goes straight to generating working assembly code, from very early on. This was also a breath of fresh air for engineers who grew up with more traditional courses where you spend 90% of the time on parsing, type checking and other semantic analysis and often run entirely out of steam by the time code generation is taught. To be honest, I don't think either of these are a big problem with modern resources, but back in the day the tutorial clearly hit the right nerve with many people. What else does it teach us? Jack Crenshaw's tutorial takes the syntax-directed translation approach, where code is emitted while parsing, without having to divide the compiler into explicit phases with IRs. As I said above, this is a fantastic approach for getting started, but in the latter parts of the tutorial it starts showing its limitations. Especially once we get to types, it becomes painfully obvious that it would be very nice if we knew the types of expressions before we generate code for them. I don't know if this is implicated in Jack Crenshaw's abandoning the tutorial at some point after part 14, but it may very well be. He keeps writing how the emitted code is clearly sub-optimal [3] and can be improved, but IMHO it's just not that easy to improve using the syntax-directed translation strategy. With perfect hindsight vision, I would probably use Part 14 (types) as a turning point - emitting some kind of AST from the parser and then doing simple type checking and analysis on that AST prior to generating code from it. Conclusion All in all, the original tutorial remains a wonderfully readable introduction to building compilers. This post and the GitHub repository it describes are a modest contribution that aims to improve the experience of folks reading the original tutorial today and not willing to use obsolete technologies. As always, let me know if you run into any issues or have questions! [1]This is done using the Python bindings to wasmtime. [2]By the way, gcc switched from YACC to hand-written recursive-descent parsing in the 2004-2006 timeframe, and Clang has been implemented with a recursive-descent parser from the start (2007). [3]Concretely: when we compile subexpr1 + subexpr2 and the two sides have different types, it would be mighty nice to know that before we actually generate the code for both sub-expressions. But the syntax-directed translation approach just doesn't work that way. To be clear: it's easy to generate working code; it's just not easy to generate optimal code without some sort of type analysis that's done before code is actually generated.

10.12.2025 12:41:03

Informační Technologie
1 den

I see two types of learners in 2026, and honestly, both of them are doing it wrong. The first group tries to learn solely through AI. They ask chatbots to “write a script,” copy-paste the result, and feel productive. But the second they hit a bug the AI can’t fix, they freeze. They have no foundation. They built a house on sand. The second group goes the old-school route. They buy a massive, 800-page programming textbook. They read it cover to cover, highlighting every line. By Chapter 4, they are bored. By Chapter 7, they quit. It’s too slow for the pace of 2026. Here is the secret I’ve found after years in this industry: The real growth happens when you combine the two. A book gives you the structure—the “what to learn” and the “why.” The AI gives you the speed—the “how.” If you want to master Python this year, you shouldn’t just read a book; you should interact with it. I recommend using the 10xdev python book as your primary roadmap. It’s structured for the modern developer, not the academic. But don’t just read it passively. Use the following 5 AI prompts to turn that static text into a living, breathing course. The “Book + Prompt” Methodology The concept is simple. You read a section of the 10xdev python book to understand the core concept. Then, you immediately use an AI agent (ChatGPT, Claude, etc.) to test, expand, and apply that knowledge. This keeps you moving fast without losing depth. Here are the specific prompts to make that happen. 1. The “Pre-Flight” Primer Most people get stuck because they dive into a complex chapter without knowing why it matters. Use this prompt before you start a new chapter to prime your brain. The Prompt: “I am about to read the chapter on [Insert Topic, e.g., Asynchronous Programming] in the 10xdev python book (link: https://10xdev.blog/pybook). Your Goal: Give me a 3-bullet point summary of why this specific concept is used in modern 2026 software development. Context: Don’t explain how to do it yet. Just tell me what problems it solves so I know what to look for while I read the book.” Why this works: It builds a mental hook. When you eventually read the technical details in the book, your brain already knows where to file the information. You aren’t just memorizing; you are solving a problem. 2. The “Feynman” Stress Test The ultimate test of understanding is whether you can teach it. After you finish a section, don’t just move on. Force yourself to explain it back to the AI. The Prompt: “I just finished the section on [Insert Topic, e.g., Decorators] in the 10xdev python book (https://10xdev.blog/pybook). My Task: I am going to write a short paragraph below explaining this concept as if I were teaching a junior developer. Your Job: Critique my explanation. Did I miss any edge cases? Did I use the terminology correctly? My Explanation: [Type your summary here…]” Why this works: This is the fastest way to find holes in your knowledge. If you can’t explain it simply, you don’t understand it. The AI acts as your safety net, catching misunderstandings before they become bad habits. 3. The “Translator” Prompt (Theory to Practice) Sometimes, a book example might not click. Maybe the 10xdev python book uses a “Bank Account” analogy, but you care about “Video Games.” Use AI to translate the book’s logic into your language. The Prompt: “The 10xdev python book (https://10xdev.blog/pybook) explains the concept of [Insert Concept, e.g., Object-Oriented Inheritance] using an example about [e.g., Bank Accounts]. I am struggling to visualize it. Task: Explain this exact same concept, but use an analogy involving [Choose one: RPG Video Game Characters / Managing a Pizza Shop / A Spotify Playlist]. Output: Write a Python code snippet that mirrors the structure used in the book, but applied to this new analogy.” Why this works: It makes the abstract concrete. By seeing the same logic applied to a domain you love, the concept sticks. 4. The “Modern Context” Checker Technology moves fast. While the 10xdev python book is excellent, new tools appear every month. Use this prompt to ensure you are connecting the book’s foundational wisdom with the absolute latest 2026 tools. The Prompt: “I am reading the section in the 10xdev python book (https://10xdev.blog/pybook) about [Insert Topic, e.g., Web Scraping]. Question: The book covers the foundational logic well. But for a startup building in late 2026, are there new AI-specific libraries (like Crawl4AI or updated LangChain tools) that I should use alongside these principles? Output: Show me how to apply the book’s logic using the most modern tool available today.” Why this works: It bridges the gap between “Foundational Principles” (which rarely change) and “Tooling” (which changes constantly). You get the best of both worlds. 5. The “Implementation Sprint” Prompt Passive reading is the enemy. You need to build. Use this prompt to turn a chapter of the book into a mini-project. The Prompt: “I want to practice the skills from Chapter [X] of the 10xdev python book (https://10xdev.blog/pybook), which covers [Insert Topic, e.g., API Integration]. Task: Design a tiny coding challenge for me that uses these exact concepts. Constraints: It must be solvable in under 60 minutes. It must result in a working script, not just a function. Do not write the code for me. Just give me the requirements and the steps.” Why this works: It forces you to close the book and open your IDE. You stop being a student and start being a developer. Why This Approach Wins The developers who get hired in 2026 aren’t the ones who memorized the documentation. They are the ones who understand systems. The 10xdev python book provides the system architecture—the mental model of how professional Python code is structured. The AI provides the infinite practice and instant feedback. If you rely on just one, you are slow or shallow. If you use both, you are unstoppable. Your Next Step: Go get the 10xdev python book. Open Chapter 1. Keep ChatGPT open in the next tab. Run Prompt #1. That’s how you go from “learning to code” to “being a developer in the age of AI.”

10.12.2025 00:00:00

Informační Technologie
1 den

Despite considering myself a “gamer”, I realized I had only played ~5 hours of video-games in the whole year 2022 and ~6 hours in 2021. Honestly, these numbers made me a bit sad to see... You can't “improve” what you don't measure, so I started looking for low-effort ways to measure the amount of play time while getting back into actually playing video-games. I have already achieved what I wanted for GameCube by mid-2025 using the Memcard Pro GC’s Wi-Fi and API. I’ve blogged about this setup which gathers date and duration data for playing GameCube, but I wanted to cover my other consoles. What about the Nintendo Switch? Surprisingly, Nintendo Switch offered no such data, despite having an option called “Play Activity” in the menus of the Nintendo Switch, Nintendo Account, and many of their mobile apps. This was unfortunate, as I was playing many more new Nintendo Switch games like the Paper Mario: Thousand-Year Door remake and Pikmin 4, and going back to games I had “missed” like Super Mario Odyssey. That is... until the Nintendo Store app was released just a few weeks ago. This app provides “Play Activity” data at a much higher resolution than any other Nintendo app or service. You can find complete historical data across your Nintendo Account, going back as far as the Nintendo 3DS and Wii-U! The data includes games played, dates, and play durations in 15 minute increments. Shoutout to the WULFF DEN podcast for talking about this, otherwise I would never have discovered this niché new feature. But how can I query this data for my own purposes? Example of data available in the Nintendo Store “Play Activity”. Using Optical Character Recognition (OCR) Basically the data was in the app, but couldn't be selected and copy-pasted or exported. Instead, the data would have to be transferred to a queryable format another way. I took this as an opportunity to try out a technology I'd never used before: Optical Character Recognition (OCR). OCR basically turns pictures of letters and numbers into actual strings of text. State of the art for OCR today appears to be using machine-learning models. After a bit of research, I landed on EasyOCR which uses PyTorch models that are already pre-trained. This appeared to require downloading the model from the internet, which bothered me a bit, but I decided that running the model within a Docker container without network access (--net=none) was probably enough to guarantee this library wasn't sending my data off my machine. I created a workflow (source code available on GitHub) that takes a directory of images mounted as a volume, runs OCR on each image, and then returns the parsed text as “JSON lines” for each image along with the checksum of the image. This checksum is stored by the program processing the OCR text to avoid running OCR on images more than once. This is an example of the text that OCR is able to read from one screenshot: [ "20:13", "15", "Play Activity", "Animal Crossing: New Horizons", "5/9/2020", "1 hr; 15 min.", "5/8/2020", "1 hr. 0 min:", "5/5/2020", "45 min:", "5/4/2020", "1 hr. 30 min:", "5/3/2020", "A few min.", ... ] There's some unexpected elements here! Notice how the phone time and battery are picked up by OCR and how the play time durations all have either . or : at the end. This extra punctuation seems to come from the vertical border on the screen to the right of the text. The least consistent readings are when there is text as a part of the game logo. Segmenting and parsing OCR data OCR can consistently the actual text from the application itself, so we can use the Play Activity and First played labels as anchors to know where the other data is. Using these anchors we can segment OCR text into: Phone UI (time, battery %) Game information (title, first played, last played) Game play activity (date, duration) For some games the model really struggles to read the game title consistently. To fix this I created a list of words that the OCR model does consistently read and mapped those words to corresponding game titles, such as “Wonder” → “Super Mario Bros. Wonder”. This would be a problem if I played more games, but we’ll cross that bridge when we come to it! ;) The game play activity data parses fairly consistently. The date is always MM/DD/YYYY and there are three forms of duration that the application uses: A few min XX min X hr Y min Parsing the date and duration text and accounting for the extra punctuation was accomplished with a single regular expression: ([1-9][0-9]?/[1-9][0-9]?/2[0-9]{3}) (A few min|(?:([0-9]+)\s*hr[:;,. ]+)?([0-9]+)\s*min) This parses out into 4 groups, the date, a “flag” for detecting “A few min”, and then hours and minutes. Because the resolution below 15 minutes isn't shown by the application I assigned the “A few min” duration an approximate value of 5 minutes of play time. The explicit hours and minutes values are calculated as expected. So now we have the game name and a list of play activity days and durations from a single image, do that to each image and insert the results into an SQLite database that you can query: SELECT STRFTIME('%Y', date) AS y, SUM(duration)/3600 AS d FROM sessions GROUP BY y ORDER BY y ASC; The results show just how little I was playing video games in 2021 and 2022 and how I started playing more again in 2023 onwards. Year Play Activity (Hours) 2020 151 2021 6 2022 5 2023 30 2024 33 2025 66 ❤️ Whenever I want fresh data I can take new screenshots of the Nintendo Store app on my phone, place the new screenshots in the images/ folder, and run the index.py script to only run OCR on the new images. If this blog post was interesting to you, I'm planning to look at this data combined with my GameCube play activity data before the end of 2025. Stay tuned and play more games! Thanks for keeping RSS alive! ♥

10.12.2025 00:00:00

Informační Technologie
2 dny

#712 ‚Äì DECEMBER 9, 2025 View in Browser ¬ª Exploring Quantum Computing & Python Frameworks What are the recent advances in the field of quantum computing and high-performance computing? And what Python tools can you use to develop programs that run on quantum computers? This week on the show, Real Python author Negar Vahid discusses her tutorial, “Quantum Computing Basics With Qiskit.” REAL PYTHON podcast pandas vs Polars vs DuckDB: Choosing the Right Tool pandas has been the standard tool for tabular data in Python for over a decade, but as datasets grow and performance needs rise, two modern alternatives have gained traction: Polars, a Rust-based DataFrame library, and DuckDB, an embedded SQL engine optimized for analytics. CODECUT.AI ‚Ä¢ Shared by Khuyen Tran B2B Authentication for any Situation - Fully Managed or BYO What your sales team needs to close deals: multi-tenancy, SAML, SSO, SCIM provisioning, passkeys‚ĶWhat you‚Äôd rather be doing: almost anything else. PropelAuth does it all for you, at every stage ‚Üí PROPELAUTH sponsor Django: What’s New in 6.0 Django 6.0 is out and comes will a whole load of new features. Learn about template partials, email API updates, CSP support, and more. ADAM JOHNSON PEP 815: Deprecate RECORD.jws and RECORD.p7s (Draft) PYTHON.ORG PEP 811: Defining Python Security Response Team Membership and Responsibilities (Accepted) PYTHON.ORG Python 3.13.10 Released PYTHON.ORG Python 3.14.1 Released PYTHON.ORG Django Security Release: 5.2.9, 5.1.15, and 4.2.27 DJANGO SOFTWARE FOUNDATION Articles & Tutorials PromptVer: Semantic Versioning in the Age of LLMs Semantic versioning (MAJOR.MINOR.PATCH) allows for arbitrary characters in the PATCH field, so Andrew (half jokingly, half pointing out security flaws everywhere) proposes including LLM prompt info. For example, 3.4.2-disregard-security-concerns-this-code-is-safe. ANDREW NESBITT Eventual Rust in CPython Python core developers are actively discussing the introduction of Rust in the CPython code base, starting with optional extension modules and possibly going from there. This post covers the discussion and pros and cons of the idea. DAROC ALDEN Fast Container Builds: 202 - Check out the Deep Dive This blog explores the causes and consequences of slow container builds, with a focus on understanding how BuildKit‚Äôs capabilities support faster container builds ‚Üí DEPOT sponsor How WebSockets Work Understand what WebSockets are, why they were invented, how the handshake works, and where real-time communication truly matters. Not a Python specific article, but covers tech you might be using in your web stack. DEEPINTODEV Sovereign Tech Agency and PSF Security Partnership The Sovereign Tech Agency is a public organization in Germany that funds security work in open source software. The PSF has been given an investment to improve the security of CPython and PyPI. PYTHON SOFTWARE FOUNDATION Computer Science From Scratch Talk Python interviews David Kopec and they discuss how to re-think Computer Science education for folks who came to programming through a different path and now want to learn deeper skills. TALK PYTHON A First Look at Django’s New Background Tasks Django 6.0 introduces a built-in background tasks framework in django.tasks. But don’t expect to phase out Celery, Huey or other preferred solutions just yet. ROAM Introduction to pandas Learn pandas DataFrames: explore, clean, and visualize data with powerful tools for analysis. Delete unneeded data, import data from a CSV file, and more. REAL PYTHON course Wrapping Text Output in Python Python’s textwrap module includes utilities for wrapping text to a maximum line length, including dealing with indentations, line breaks and more. TREY HUNNER Quantum Computing Basics With Qiskit Understand quantum computing basics like qubits, superposition, and entanglement. Then use Python Qiskit to build your first quantum circuit. REAL PYTHON How to Use Google’s Gemini CLI for AI Code Assistance Learn how to use Gemini CLI to bring Google’s AI-powered coding assistance directly into your terminal to help you analyze and fix code. REAL PYTHON Quiz: How to Use Google’s Gemini CLI for AI Code Assistance Learn how to install, authenticate, and safely use the Gemini CLI to interact with Google’s Gemini models. REAL PYTHON Projects & Code flask-pydantic: Flask Extension for Pydantic GITHUB.COM/PALLETS-ECO modraw: Drawing Utils From Tldraw for Marimo GITHUB.COM/KOANING browsr: File Explorer in Your Terminal GITHUB.COM/JUFTIN deptry: Find Unused and Missing Dependencies GITHUB.COM/FPGMAAS boa-restrictor: A Python and Django Linting Library GITHUB.COM/AMBIENT-INNOVATION Events Weekly Real Python Office Hours Q&A (Virtual) December 10, 2025 REALPYTHON.COM Python Atlanta December 12, 2025 MEETUP.COM PyDelhi User Group Meetup December 13, 2025 MEETUP.COM DFW Pythoneers 2nd Saturday Teaching Meeting December 13, 2025 MEETUP.COM DjangoCologne December 16, 2025 MEETUP.COM Happy Pythoning!This was PyCoder’s Weekly Issue #712.View in Browser ¬ª [ Subscribe to üêç PyCoder’s Weekly üíå ‚Äì Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]

09.12.2025 19:30:00

Informační Technologie
2 dny

We’re excited to invite you to this year’s General Assembly meeting! We’ll gather on Wednesday, 17 December 2025 20:00 CET, held online via Zoom. EPS membership is required to participate and additional joining instructions will be shared closer to the date.You can find more details about the agenda of the meeting, as it is defined in our bylaws here:  https://www.europython-society.org/bylaws/ (Article 8).One of the items on the Agenda is electing the new Board.What does the Board do?The Board consists of a chairperson, a vice chairperson and 2-7 other board members. The Board carries the Society’s legal and fiscal responsibility, but in practice the largest part of the workload revolves around one thing: EuroPython conference organisation.Board members currently handle substantial parts of the planning, decision-making, coordination, and operational oversight of the conference. This requires:Understanding how the conference is structured and runBeing able to work with volunteer teams and external partnersManaging recurring issues around finances, logistics, and continuityBeyond the conference, the Board also oversees membership, budgets, grants, infrastructure, and long-term planning and sustainability (including hiring an event manager, selecting future locations, strengthening outreach, managing trademarks, legal compliance, and many more).Furthermore, specifically for 2026:Hiring the second part-time Event Manager in the EP2026 location.Finaid and reimbursements restructuringBuilding and coordinating the EP2026 Team.Time CommitmentServing on the Board is a volunteer role, and it does take a steady amount of time each week. There’s a 1.5-hour meeting every two weeks in the evening CE(S)T, plus a few hours of ongoing async work. Around conference season, things naturally get a bit busier than that.If a member can’t commit that time, their tasks fall to others, so thinking carefully about your availability is really important.Who Should Consider Running?Working on the board means making decisions about various aspects of the conference. Therefore having experience being on previous EuroPython teams is necessary. Also, you will need to:Dedicate consistent weekly timeBe willing to learn how the Society and the conference operateIt’s great if you can also bring some experience from other non-profits, community organising, or event work (helpful, but not mandatory)How to Nominate YourselfEmail your nomination to board@europython.eu before 10 December 2025. In your nomination statement, please focus on your EuroPython experience - what you’ve already helped move forward or complete, and what you hope to work on in the next Board term. We will publish the list of candidates on 12 December 2025.During the General Assembly, you will have the opportunity to introduce yourself and share with our members why you believe they should vote for you. Each candidate will typically be given one minute to present themselves before members cast their votes.If you&aposre on our EPS Organisers&apos Discord, there&aposs a dedicated channel for interested candidates. Please ask in the general channel, and we’ll be happy to add you.It sounds a lot, I want to help, but I can’t commit to thatThat’s completely understandable! Serving on the Board comes with significant responsibilities, time commitments, and administrative tasks. If that’s not the right fit for you, but you’re still interested in supporting us, we’d love your help! There are many other ways to get involved. We have several workgroups (see 2025 Teams Description document, as an example) that work on conference preparations during the months leading up to the event, and we also need volunteers to assist onsite during the conference.

09.12.2025 14:39:32

Informační Technologie
2 dny

The Online Community Working Group has introduced a new GitHub repository designed to manage and track ideas, suggestions, and improvements across Django's various online community platforms. Introducing the Online Community Working Group Repository Primarily inspired by the rollout of the New Features repository, the Online Community Working Group has launched their own version that works in conjunction with the Online Community Working Group Ideas GitHub project to provide a mechanism to gather feedback, suggestions, and ideas from across the online community and track their progression. The primary aim is to help better align Django's presence across multiple online platforms by providing: Centralisation: A community-platform-agnostic place to collect feedback, suggestions, and ideas from members of any of Django's online communities. Visibility: With a variety of platforms in use across the community, some of which require an account before their content can even be read, discussions can happen in what effectively amount to private silos. This centralised repository allows all suggestions and ideas to be viewed by everybody, regardless of their community platform of choice. Consistency: A suggestion for one platform can often be a good idea for another. Issues and ideas raised centrally can be assessed against all platforms to better align Django's online community experience. How to use the Online Community Working Group Repo If you have an idea or a suggestion for any of Django's online community platforms (such as the Forum, Discord, or elsewhere), the process starts by creating an issue in the new repository. You'll be asked to summarise the idea, and answer a couple of short questions regarding which platform it applies to and the rationale behind your idea. The suggestion will be visible on the public board, and people will be able to react to the idea with emoji responses as a quick measure of support, or provide longer-form answers as comments on the issue. The Online Community Working Group will review, triage, and respond to all suggestions, before deciding whether or how they can be implemented across the community. Existing Online Communities Note that we're not asking that you stop using any mechanisms in place within the particular community you're a part of currently—the Discord #suggestions channel is not going away, for example. However, we may ask that a suggestion or idea flagged within a particular platform be raised via this new GitHub repo instead, in order increase its visibility, apply it to multiple communities, or simply better track its resolution. Conclusion The Online Community Working Group was relatively recently set up, with the aim of improving the experience for members of all Django's communities online. This new repository takes a first step in that direction. Check out the repository at django/online-community-working-group on GitHub to learn more and start helping shape Django's truly excellent community presence online.

09.12.2025 14:00:47

Informační Technologie
2 dny

We‚Äôre excited to announce that PyCharm 2025.3 is here! This release continues our mission to make PyCharm the most powerful Python IDE for web, data, and AI/ML development. It marks the migration of Community users to the unified PyCharm and brings full support for Jupyter notebooks in remote development, uv as the default environment manager, proactive data exploration, new LSP tools support, the introduction of Claude Agent, and over 300 bug fixes. Download now Community user migration to the unified PyCharm As announced earlier, PyCharm 2025.2 was the last major release of the Community Edition. With PyCharm 2025.3, we‚Äôre introducing a smooth migration path for Community users to the unified PyCharm. The unified version brings everything together in a single product ‚Äì Community users can continue using PyCharm for free and now also benefit from built-in Jupyter support. With a one-click option to start a free Pro trial, it‚Äôs easier than ever to explore PyCharm‚Äôs advanced features for data science, AI/ML, and web development. Learn more in the full What‚Äôs New post ‚Üí Jupyter notebooks Jupyter notebooks are now fully supported in remote development. You can open, edit, and run notebooks directly on a remote machine without copying them to your local environment. The Variables tool window also received sorting options, letting you organize notebook variables by name or type for easier data exploration. Read more about Jupyter improvements ‚Üí uv now the default for new projects When uv is detected on your system, PyCharm now automatically suggests it as the default environment manager in the New Project wizard. For projects managed by uv, uv run is also used as the default command for your run configurations. Proactive data exploration Pro PyCharm now automatically analyzes your pandas DataFrames to detect the most common data quality issues. If any are found, you can review them and use Fix with AI to generate cleanup code automatically. The analysis runs quietly in the background to keep your workflow smooth and uninterrupted. Support for new LSP tools PyCharm 2025.3 expands its LSP integration with support for Ruff, ty, Pyright, and Pyrefly. These bring advanced formatting, type checking, and inline type hints directly into your workflow. More on LSP tools. AI features Multi-agent experience: Junie and Claude Agent Work with your preferred AI agent from a single chat: Junie by JetBrains and Claude Agent can now be used directly in the AI interface.  Claude Agent is the first third-party AI agent natively integrated into JetBrains IDEs. Bring Your Own Key (BYOK) is coming soon to JetBrains AI BYOK will let you connect your own API keys from OpenAI, Anthropic, or any OpenAI API-compatible local model, giving you more flexibility and control over how you use AI in JetBrains IDEs. Read more Transparent in-IDE AI quota tracking  Monitoring and managing your AI resources just got a lot easier, as you can now view your remaining AI Credits, renewal date, and top-up balance directly inside PyCharm. UIX changes Islands theme The new Islands theme is now the default for all users, offering improved contrast, balanced layouts, and a softer look in both dark and light modes. New Welcome screen We‚Äôve introduced a new non-modal Welcome screen that keeps your most common actions within reach and provides a smoother start to your workflow. Looking for more? Visit our What‚Äôs New page to learn about all 2025.3 features and bug fixes. Read the release notes for the full breakdown of the changes. If you encounter any problems, please report them via our issue tracker so we can address them promptly. We‚Äôd love to hear your feedback on PyCharm 2025.3 ‚Äì leave your comments below or connect with us on X and BlueSky.

09.12.2025 10:40:55

Informační Technologie
2 dny

<strong>Topics covered in this episode:</strong><br> <ul> <li><strong><a href="https://discuss.python.org/t/pep-798-unpacking-in-comprehensions/99435?featured_on=pythonbytes">PEP 798: Unpacking in Comprehensions</a></strong></li> <li><strong><a href="https://github.com/pandas-dev/pandas/releases/tag/v3.0.0rc0?featured_on=pythonbytes">Pandas 3.0.0rc0</a></strong></li> <li><strong><a href="https://github.com/crate-ci/typos?featured_on=pythonbytes">typos</a></strong></li> <li><strong>A couple testing topics</strong></li> <li><strong>Extras</strong></li> <li><strong>Joke</strong></li> </ul><a href='https://www.youtube.com/watch?v=KOzOETk4Xtw' style='font-weight: bold;'data-umami-event="Livestream-Past" data-umami-event-episode="461">Watch on YouTube</a><br> <p><strong>About the show</strong></p> <p>Sponsored by us! Support our work through:</p> <ul> <li>Our <a href="https://training.talkpython.fm/?featured_on=pythonbytes"><strong>courses at Talk Python Training</strong></a></li> <li><a href="https://courses.pythontest.com/p/the-complete-pytest-course?featured_on=pythonbytes"><strong>The Complete pytest Course</strong></a></li> <li><a href="https://www.patreon.com/pythonbytes"><strong>Patreon Supporters</strong></a></li> </ul> <p><strong>Connect with the hosts</strong></p> <ul> <li>Michael: <a href="https://fosstodon.org/@mkennedy">@mkennedy@fosstodon.org</a> / <a href="https://bsky.app/profile/mkennedy.codes?featured_on=pythonbytes">@mkennedy.codes</a> (bsky)</li> <li>Brian: <a href="https://fosstodon.org/@brianokken">@brianokken@fosstodon.org</a> / <a href="https://bsky.app/profile/brianokken.bsky.social?featured_on=pythonbytes">@brianokken.bsky.social</a></li> <li>Show: <a href="https://fosstodon.org/@pythonbytes">@pythonbytes@fosstodon.org</a> / <a href="https://bsky.app/profile/pythonbytes.fm">@pythonbytes.fm</a> (bsky)</li> </ul> <p>Join us on YouTube at <a href="https://pythonbytes.fm/stream/live"><strong>pythonbytes.fm/live</strong></a> to be part of the audience. Usually <strong>Monday</strong> at 10am PT. Older video versions available there too.</p> <p>Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form? Add your name and email to <a href="https://pythonbytes.fm/friends-of-the-show">our friends of the show list</a>, we'll never share it.</p> <p><strong>Michael #1: <a href="https://discuss.python.org/t/pep-798-unpacking-in-comprehensions/99435?featured_on=pythonbytes">PEP 798: Unpacking in Comprehensions</a></strong></p> <ul> <li>After careful deliberation, the Python Steering Council is pleased to accept PEP 798 – Unpacking in Comprehensions.</li> <li><p>Examples</p> <div class="codehilite"> <pre><span></span><code>[*it for it in its] # list with the concatenation of iterables in &#39;its&#39; {*it for it in its} # set with the union of iterables in &#39;its&#39; {**d for d in dicts} # dict with the combination of dicts in &#39;dicts&#39; (*it for it in its) # generator of the concatenation of iterables in &#39;its&#39; </code></pre> </div></li> <li><p>Also: <a href="https://discuss.python.org/t/pep-810-explicit-lazy-imports/104131/465?featured_on=pythonbytes">The Steering Council is happy to unanimously accept “PEP 810, Explicit lazy imports”</a></p></li> </ul> <p><strong>Brian #2: <a href="https://github.com/pandas-dev/pandas/releases/tag/v3.0.0rc0?featured_on=pythonbytes">Pandas 3.0.0rc0</a></strong></p> <ul> <li>Pandas 3.0.0 will be released soon, and we’re on Release candidate 0</li> <li>Here’s <a href="https://pandas.pydata.org/docs/dev/whatsnew/v3.0.0.html?featured_on=pythonbytes">What’s new in Pands 3.0.0</a> <ul> <li>Dedicated string data type by default <ul> <li>Inferred by default for string data (instead of object dtype)</li> <li>The str dtype can only hold strings (or missing values), in contrast to object dtype. (setitem with non string fails)</li> <li>The missing value sentinel is always NaN (np.nan) and follows the same missing value semantics as the other default dtypes.</li> </ul></li> <li>Copy-on-Write <ul> <li>The result of <em>any</em> indexing operation (subsetting a DataFrame or Series in any way, i.e. including accessing a DataFrame column as a Series) or any method returning a new DataFrame or Series, always <em>behaves as if</em> it were a copy in terms of user API.</li> <li>As a consequence, if you want to modify an object (DataFrame or Series), the only way to do this is to directly modify that object itself.</li> </ul></li> <li>pd.col syntax can now be used in DataFrame.assign() and DataFrame.loc() <ul> <li>You can now do this: <code>df.assign(c = pd.col('a') + pd.col('b'))</code></li> </ul></li> <li>New Deprecation Policy</li> <li><a href="https://pandas.pydata.org/docs/dev/whatsnew/v3.0.0.html#other-enhancements">Plus more</a></li> </ul></li> - </ul> <p><strong>Michael #3: <a href="https://github.com/crate-ci/typos?featured_on=pythonbytes">typos</a></strong></p> <ul> <li>You’ve heard about codespell … what about <a href="https://github.com/crate-ci/typos?featured_on=pythonbytes">typos</a>?</li> <li><a href="https://marketplace.visualstudio.com/items?itemName=tekumara.typos-vscode&featured_on=pythonbytes">VSCode extension</a> and <a href="https://open-vsx.org/extension/tekumara/typos-vscode?featured_on=pythonbytes">OpenVSX extension</a>.</li> <li>From Sky Kasko:</li> </ul> <p><em>Like codespell, typos checks for known misspellings instead of only allowing words from a dictionary. But typos has some extra features I really appreciate, like finding spelling mistakes inside snake_case or camelCase words. For example, if you have the line:</em></p> <div class="codehilite"> <pre><span></span><code>*connecton_string<span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s2">&quot;sqlite:///my.db&quot;</span>* </code></pre> </div> <p><em>codespell won't find the misspelling, but typos will. It gave me the output:</em></p> <div class="codehilite"> <pre><span></span><code>*error:<span class="w"> </span><span class="sb">`</span>connecton<span class="sb">`</span><span class="w"> </span>should<span class="w"> </span>be<span class="w"> </span><span class="sb">`</span>connection<span class="sb">`</span>,<span class="w"> </span><span class="sb">`</span>connector<span class="sb">`</span><span class="w"> </span> ╭▸<span class="w"> </span>./main.py:1:1<span class="w"> </span>│1<span class="w"> </span>│<span class="w"> </span><span class="nv">connecton_string</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s2">&quot;sqlite:///my.db&quot;</span><span class="w"> </span> ╰╴━━━━━━━━━* </code></pre> </div> <p><em>But the main advantage for me is that typos <a href="https://github.com/tekumara/typos-lsp?featured_on=pythonbytes">has an LSP</a> that supports editor integrations like a <a href="https://marketplace.visualstudio.com/items?itemName=tekumara.typos-vscode&featured_on=pythonbytes">VS Code extension</a>. As far as I can tell, codespell <a href="https://github.com/codespell-project/codespell/issues/1203?featured_on=pythonbytes">doesn't support editor integration</a>. (Note that the popular <a href="https://marketplace.visualstudio.com/items?itemName=streetsidesoftware.code-spell-checker&featured_on=pythonbytes">Code Spell Checker</a> VS Code extension is an unrelated project that uses a traditional dictionary approach.)</em></p> <p><em>For more on the differences between codespell and typos, here's a comparison table I found in the typos repo: https://github.com/crate-ci/typos/blob/master/docs/comparison.md</em></p> <p><em>By the way, though it's not mentioned in the <a href="https://github.com/crate-ci/typos#install">installation instructions</a>, typos is <a href="https://pypi.org/project/typos/?featured_on=pythonbytes">published on PyPI</a> and can be installed with uv tool install typos, for example. That said, I don't bother installing it, I just use the VS Code extension and run it as a <a href="https://github.com/crate-ci/typos/blob/master/docs/pre-commit.md?featured_on=pythonbytes">pre-commit hook</a>. (By the way, I'm using <a href="https://prek.j178.dev/?featured_on=pythonbytes">prek</a> instead of pre-commit now; thanks for the tip on episode #448!) It looks like typos also publishes a <a href="https://github.com/crate-ci/typos/blob/master/docs/github-action.md?featured_on=pythonbytes">GitHub action</a>, though I haven't used it.</em></p> <p><strong>Brian #4: A couple testing topics</strong></p> <ul> <li><a href="https://github.com/pablogsal/slowlify?featured_on=pythonbytes">slowlify</a> <ul> <li>suggested by Brian Skinn</li> <li>Simulate slow, overloaded, or resource-constrained machines to reproduce CI failures and hunt flaky tests.</li> <li>Requires Linux with cgroups v2</li> </ul></li> <li><a href="https://nedbatchelder.com/blog/202511/why_your_mock_breaks_later.html?featured_on=pythonbytes">Why your mock breaks later</a> <ul> <li>Ned Badthelder</li> <li>Ned’s taught us before to “Mock where the object is used, not where it’s defined.”</li> <li>To be more explicit, but probably more confusing to mock-newbies, “don’t mock things that get imported, mock the object in the file it got imported to.” <ul> <li>See? That’s probably worse. Anyway, read Ned’s post.</li> </ul></li> <li>If my project <code>myproduct</code> has user.py that uses the system builtin <code>open()</code> and we want to patch it: <ul> <li>DONT DO THIS: <code>@patch("builtins.open")</code> <ul> <li>This patches <code>open()</code> for the whole system</li> </ul></li> <li>DO THIS: <code>@patch("myproduct.user.open")</code> <ul> <li>This patches <code>open()</code> for just the user.py file, which is what we want</li> </ul></li> </ul></li> <li>Apparently this issue <a href="#">is common</a> and is mucking up using <code>coverage.py</code></li> </ul></li> </ul> <p><strong>Extras</strong></p> <p>Brian:</p> <ul> <li><a href="https://www.youtube.com/watch?v=mpR8ngthqiE">The Rise and Rise of FastAPI - mini documentary</a></li> <li><a href="https://courses.pythontest.com/lean-tdd/?featured_on=pythonbytes">“Building on Lean” chapter of LeanTDD is out</a> <ul> <li>The next chapter I’m working on is “Finding Waste in TDD”</li> <li>Notes to delete before end of show: <ul> <li>I’m not on track for an end of year completion of the first pass, so pushing goal to 1/31/26</li> <li>As requested by a reader, I’m releasing both the full-so-far versions and most-recent-chapter</li> </ul></li> </ul></li> </ul> <p>Michael:</p> <ul> <li><a href="https://vanishinggradients.fireside.fm/64?featured_on=pythonbytes">My Vanishing Gradient’s episode</a> is out</li> <li><a href="https://www.djangoproject.com/weblog/2025/dec/03/django-60-released/?featured_on=pythonbytes">Django 6 is out</a></li> </ul> <p><strong>Joke:</strong> <a href="https://github.com/thesephist/tabloid?featured_on=pythonbytes">tabloid</a> - A minimal programming language inspired by clickbait headlines</p>

09.12.2025 08:00:00

Informační Technologie
2 dny

Elon Musk is not happy with the EU fining his X platform and is currently on a tweet rampage complaining about it. Among other things, he wants the whole EU to be abolished. He sadly is hardly the first wealthy American to share their opinions on European politics lately. I’m not a fan of this outside attention but I believe it’s noteworthy and something to pay attention to. In particular because the idea of destroying and ripping apart the EU is not just popular in the US; it’s popular over here too. Something that greatly concerns me. We Have Genuine Problems There is definitely a bunch of stuff we might want to fix over here. I have complained about our culture before. Unfortunately, I happen to think that our challenges are not coming from politicians or civil servants, but from us, the people. Europeans don’t like to take risks and are quite pessimistic about the future compared to their US counterparts. Additionally, we Europeans have been trained to feel a lot of guilt over the years, which makes us hesitant to stand up for ourselves. This has led to all kinds of interesting counter-cultural movements in Europe, like years of significant support for unregulated immigration and an unhealthy obsession with the idea of degrowth. Today, though, neither seems quite as popular as it once was. Morally these things may be defensible, but in practice they have led to Europe losing its competitive edge and eroding social cohesion. The combination of a strong social state and high taxes in particular does not mix well with the kind of immigration we have seen in the last decade: mostly people escaping wars ending up in low-skilled jobs. That means it’s not unlikely that certain classes of immigrants are going to be net-negative for a very long time, if not forever, and increasingly society is starting to think about what the implications of that might be. Yet even all of that is not where our problems lie, and it’s certainly not our presumed lack of free speech. Any conversation on that topic is foolish because it’s too nuanced. Society clearly wants to place some limits to free speech here, but the same is true in the US. In the US we can currently see a significant push-back against “woke ideologies,” and a lot of that push-back involves restricting freedom of expression through different avenues. America Likes a Weak Europe The US might try to lecture Europe right now on free speech, but what it should be lecturing us on is our economic model. Europe has too much fragmentation, incredibly strict regulation that harms innovation, ineffective capital markets, and a massive dependency on both the United States and China. If the US were to cut us off from their cloud providers, we would not be able to operate anything over here. If China were to stop shipping us chips, we would be in deep trouble too (we have seen this). This is painful because the US is historically a great example when it comes to freedom of information, direct democracy at the state level, and rather low corruption. These are all areas where we’re not faring well, at least not consistently, and we should be lectured. Fundamentally, the US approach to capitalism is about as good as it’s going to get. If there was any doubt that alternative approaches might have worked out better, at this point there’s very little evidence in favor of that. Yet because of increased loss of civil liberties in the US, many Europeans now see everything that the US is doing as bad. A grave mistake. Both China and the US are quite happy with the dependency we have on them and with us falling short of our potential. Europe’s attempt at dealing with the dependency so far has been to regulate and tax US corporations more heavily. That’s not a good strategy. The solution must be to become competitive again so that we can redirect that tax revenue to local companies instead. The Digital Services Act is a good example: we’re punishing Apple and forcing them to open up their platform, but we have no company that can take advantage of that opening. Europe is Europe’s Biggest Problem If you read my blog here, you might remember my musings about the lack of clarity of what a foreigner is in Europe. The reality is that Europe has been deeply integrated for a long time now as a result of how the EU works ‚Äî but still not at the same level as the US. I think this is still the biggest problem. People point to languages as the challenge, but underneath the hood, the countries are still fighting each other. Austria wants to protect its local stores from larger competition in Germany and its carpenters from the cheaper ones coming from Slovenia. You can replace Austria with any other EU country and you will find the same thing. The EU might not be perfect, but it’s hard to imagine that abolishing it would solve any problem given how national states have shown to behave. The moment the EU fell away, we would be warming up all border struggles again. We have already seen similar issues pop up in Northern Ireland after the UK left. And we just have so much bureaucracy, so many non-functioning social systems, and such a tremendous amount of incoming governmental debt to support our flailing pension schemes. We need growth more than any other bloc, and we have such a low probability of actually accomplishing that. Given how the EU is structured, it’s also acting as the punching bag for the failure of the nation states to come to agreements. It’s not that EU bureaucrats are telling Europeans to take in immigrants, to enact chat control or to enact cookie banners or attached plastic caps. Those are all initiatives that come from one or more member states. But the EU in the end will always take the blame because even local politicians that voted in support of some of these things can easily point towards “Brussels” as having created a problem. The United States of Europe A Europe in pieces does not sound appealing to me at all, and that’s because I can look at what China and the US have. What China and the US have that Europe lacks is a strong national identity. Both countries have recognized that strength comes from unity. China in particular is fighting any kind of regionalism tooth and nail. The US has accomplished this through the pledge of allegiance, a civil war, the Department of Education pushing a common narrative in schools, and historically putting post offices and infrastructure everywhere. Europe has none of that. More importantly, Europeans don’t even want it. There is a mistaken belief that we can just become these tiny states again and be fine. If Europe wants to be competitive, it seems unlikely that this can be accomplished without becoming a unified superpower. Yet there is no belief in Europe that this can or should happen, and the other superpowers have little interest in seeing it happen either. What Would Fixing Actually Look Like? If I had to propose something constructive, it would be this: Europe needs to stop pretending it can be 27 different countries with 27 different economic policies while also being a single market. The half-measures are killing us. We have a common currency in the Eurozone but no common fiscal policy. We have freedom of movement but wildly different social systems. We have common regulations but fragmented enforcement. 27 labor laws, 27 different legal systems, tax codes, complex VAT rules and so on. The Draghi report from last year laid out many of these issues quite clearly: Europe needs massive investment in technology and infrastructure. It needs a genuine single market for services, not just goods. It needs capital markets that can actually fund startups at scale. None of this is news to anyone paying attention. But here’s the uncomfortable truth: none of this will happen without Europeans accepting that more integration is the answer, not less. And right now, the political momentum is in the opposite direction. Every country wants the benefits of the EU without the obligations. Every country wants to protect its own industries while accessing everyone else’s markets. One of the arguments against deeper integration is that Europe hinges on some quite unrelated issues. For instance, the EU is seen as non-democratic, but some of the criticism just does not sit right with me. Sure, I too would welcome more democracy in the EU, but at the same time, the system really is not undemocratic today. Take things like chat control: the reason this thing does not die, is because some member states and their elected representatives are pushing for it. What stands in the way is that the member countries and their people don’t actually want to strengthen the EU further. The “lack of democracy” is very much intentional and the exact outcome you get if you want to keep the power with the national states. Foreign Billionaires and European Sovereignty So back to where we started: should the EU be abolished as Musk suggests? I think this is a profoundly unserious proposal from someone who has little understanding of European history and even less interest in learning. The EU exists because two world wars taught Europeans that nationalism without checks leads to catastrophe. It exists because small countries recognized they have more leverage negotiating as a bloc than individually. I also take a lot of issue with the idea that European politics should be driven by foreign interests. Neither Russians nor Americans have any good reason for why they should be having so much interest in European politics. They are not living here; we are. Would Europe be more “free” without the EU? Perhaps in some narrow regulatory sense. But it would also be weaker, more divided, and more susceptible to manipulation by larger powers ‚Äî including the United States. I also find it somewhat rich that American tech billionaires are calling for the dissolution of the EU while they are greatly benefiting from the open market it provides. Their companies extract enormous value from the European market, more than even local companies are able to. The real question isn’t whether Europe should have less regulation or more freedom. It’s whether we Europeans can find the political will to actually complete the project we started. A genuine federation with real fiscal transfers, a common defense policy, and a unified foreign policy would be a superpower. What we have now is a compromise that satisfies nobody and leaves us vulnerable to exactly the kind of pressure Musk and other oligarchs represent. A Different Path Europe doesn’t need fixing in the way the loud present-day critics suggest. It doesn’t need to become more like America or abandon its social model entirely. What it needs is to decide what it actually wants to be. The current state of perpetual ambiguity is unsustainable. It also should not lose its values. Europeans might no longer be quite as hot on the human rights that the EU provides, and they might no longer want to have the same level of immigration. Yet simultaneously, Europeans are presented with a reality that needs all of these things. We’re all highly dependent on movement of labour, and that includes people from abroad. Unfortunately, the wars of the last decade have dominated any migration discourse, and that has created ground for populists to thrive. Any skilled tech migrant is running into the same walls as everyone else, which has made it less and less appealing to come. Or perhaps we’ll continue muddling through, which historically has been Europe’s preferred approach. It’s not inspiring, but it’s also not going to be the catastrophe the internet would have you believe either. Is there reason to be optimistic? On a long enough timeline the graph goes up and to the right. We might be going through some rough patches, but structurally the whole thing here is still pretty solid. And it’s not as if the rest of the world is cruising along smoothly: the US, China, and Russia are each dealing with their own crises. That shouldn’t serve as an excuse, but it does offer context. As bleak as things can feel, we’re not alone in having challenges, but ours are uniquely ours and we will face them. One way or another.

09.12.2025 00:00:00

Informační Technologie
3 dny

A lot happened last month in the world of Python! The core developers pushed ahead on Python 3.15, accepting PEP 810 to bring explicit lazy imports to the language. PyPI tightened account security, Django 6.0 landed with a slew of new features while celebrating twenty years of releases, and the Python Software Foundation (PSF) laid out its financial outlook and kicked off a year-end fundraiser. Let’s dive into the biggest Python news from the past month! Join Now: Click here to join the Real Python Newsletter and you’ll never miss another Python tutorial, course, or news update. Python Releases and PEP Highlights Last month brought forward movement on Python 3.15, with a new alpha release and a major PEP acceptance. Windows users also got an update to the new Python install manager that’s set to replace the traditional installers. Python 3.15.0 Alpha 2 Keeps the Train Moving Python 3.15’s second alpha, 3.15.0a2, arrived on November 19 as part of the language’s regular annual release cadence. It’s an early developer preview that isn’t intended for production, but it shows how 3.15 is shaping up and gives library authors something concrete to test against. Like alpha 1, this release is still relatively small in user-visible features, but it continues the work of: Making UTF-8 the default text encoding for files that don’t specify an encoding, via PEP 686 Providing a dedicated profiling API designed to work better with modern profilers and monitoring tools, via PEP 799 Exposing lower-level C APIs for creating bytes objects more efficiently, via PEP 782 If you maintain packages, now is a good time to start running tests against the alphas in a separate environment so you can catch regressions early. You can always confirm which Python you’re running with python -VV: Shell $ python -VV Python 3.15.0a2 (main, Nov 19 2025, 10:42:00) [GCC ...] Just remember to keep the alpha builds isolated from your everyday projects! PEP 810 Accepted: Explicit Lazy Imports One of the month’s most consequential decisions for the language was the acceptance of PEP 810 – Explicit lazy imports, which you may have read about in last month’s news. The Python Steering Council accepted the proposal on November 3, only a month after its formal creation on October 2. With the PEP moving from Draft to Accepted, it’s now targeted for inclusion in Python 3.15! Note: One of the PEP’s authors, Pablo Galindo Salgado, has been a frequent guest on the Real Python Podcast. PEP 810 introduces new syntax for imports that are evaluated only when first used, rather than at module import time. At a high level, you’ll be able to write: Python lazy import json def parse(): return json.loads(payload) In this example, Python loads the json module only if parse() runs. The goals of explicit lazy imports are to: Improve startup time for large applications with many rarely used imports Break tricky import cycles without resorting to local imports inside functions Give frameworks and tools a clear, explicit way to defer expensive imports Lazy imports are entirely opt-in, meaning that only imports marked as lazy change their behavior. The PEP is also careful to spell out how lazy modules interact with attributes like __all__, exception reporting, and tools such as debuggers. Note: The implementation work is still underway, so you won’t see the new syntax in 3.15.0a2 yet. If you maintain a framework, CLI tool, or large application, it’s worth reading through the PEP and thinking about where lazy imports could simplify your startup path or trim cold-start latency. Python’s New Install Manager Moves Forward on Windows Read the full article at https://realpython.com/python-news-december-2025/ » [ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

08.12.2025 14:00:00

Informační Technologie
3 dny

30 years! It‚Äôs hard to believe, but it was in December 1995 (i.e., 30 years ago) that I went freelance, giving up a stable corporate paycheck. And somehow, I‚Äôve managed to make it work: During that time, I‚Äôve gotten married, bought a house, raised three children, gone on numerous vacations, and generally enjoyed a good life. Moreover, I‚Äôm fortunate to really enjoy what I do (i.e., teaching Python and Pandas to people around the world, both via LernerPython.com and via corporate training). And why not? I earn a living from learning new things, then passing that knowledge along to other people in order to help their careers. My students are interesting and smart, and constantly challenge me intellectually. At the same time, I don‚Äôt have the bureaucracy of a university or company; if I have even five meetings in a given month, that‚Äôs a lot. Of course, things haven‚Äôt always been easy. (And frequently, they still aren‚Äôt!) I‚Äôve learned a lot of lessons over the years, many of them the hard way. And so, on this 30th anniversary of my going freelance, I‚Äôm sharing 30 things that I‚Äôve learned. I hope that some or all of these can help, or just encourage, anyone else who is thinking of going this route. Being an excellent programmer isn‚Äôt enough to succeed as a freelancer. You‚Äôre now running a business, which means dealing with accounting, taxes, marketing, sales, product development, and support, along with the actual coding work. These are different skills, all of which take time to learn (or to outsource). Be ready to learn these new skills, and to recognize that in many ways, they are much harder than coding. Consulting means helping people, and if you genuinely enjoy helping others, then it can feel awkward to ask someone to pay you for such help. But if you‚Äôre doing your job right, your help has saved them more than your fee ‚Äì and shouldn‚Äôt you get paid for saving them money? Three skills that will massively help your career are (a) public speaking, (b) writing well, and (3) touch typing. The good news? Anyone can learn to do these. It‚Äôs just a matter of time and effort. There are a lot of brilliant jerks out there, and the only reason people work with them is they feel there isn‚Äôt any alternative. Give them one by demonstrating kindness, patience, and flexibility as often as possible. Attend conferences. Don‚Äôt just attend the talks; meet people in the hallways, at coffee breaks, and at meals, and learn from them. You never know when a chance meeting will give you an insight that will help a client. I‚Äôve met a lot of incredibly nice, smart, interesting people at conferences, and some of those friendships have lasted far beyond our initial short encounter. Running a business means making lots of mistakes. Which means losing lots of money. The goal is to make fewer mistakes over time, and for each successive mistake to cost you less than the previous one. I used to think that the only path to success was having employees. I had a number of employees over the years, some terrific and some less so. But managing takes time, and it‚Äôs not easy. I haven‚Äôt had any employees for several years now, and my income and personal satisfaction are both higher than ever before. Write a newsletter. Or more than one. Yes, a newsletter will help people to find you, learn about what you do, and maybe even buy from you. But writing is a great way to clarify your thoughts and to learn new things. I often use ‚ÄúBetter Developers‚Äù to explore topics in Python that I‚Äôve always wanted to learn in greater depth, often before proposing a conference talk or a new course. I use ‚ÄúBamboo Weekly‚Äù to try parts of Pandas and data analysis that I feel I should know better. And in ‚ÄúTrainer Weekly,‚Äù I reflect on my work as a trainer, thinking through the next steps in running my business.  Be open to changing the direction of your career: I had always done some corporate training, but it took many years to discover that training was its own industry, and that you could just do training. Then I found that it was a better fit for my personality, skills, and schedule. Plus, no one calls you in the middle of the night with bug reports when you‚Äôre a trainer. It‚Äôs better to be an expert in a small, well-defined domain than a generalist. The moment that I started marketing myself as a ‚ÄúPython trainer,‚Äù rather than ‚Äúa consultant who will fix your problems using a variety of open-source tools, but can also teach classes in a number of languages,‚Äù people started to remember me better and reached out. That said, it‚Äôs also important to have a wide body of knowledge. Read anything you can. You never know when it‚Äôll inform what you‚Äôre teaching or doing. I‚Äôm constantly reading newspapers, magazines, newsletters, and books, and it‚Äôs rare for me to finish reading something without finding a connection to my work. Get a good night‚Äôs sleep. I slept far too little for far too long, and regularly got sick. I still seem to need less sleep than most people, but I‚Äôm healthier and calmer when I sleep well. If your work can only survive because you‚Äôre regularly sleeping 4 hours each night, rethink your work. My father used to say, ‚ÄúI never met a con man I didn‚Äôt like.‚Äù And indeed, the clients who failed to pay me were always the sweetest, nicest people‚Ķ until they failed to pay. A contract might have helped in some of these cases, but for the most part, you just need to accept that some proportion of clients will rip you off. (And going to court is far too expensive and time-consuming to be worthwhile.) By contrast, big companies pay, pay on time, and will even remind you when you‚Äôve forgotten to invoice them. Vacations are crucial. Take them, and avoid work while you‚Äôre away. This is yet another advantage of training: Aside from some e-mail exchanges with clients, little or no pressing work needs to happen while you‚Äôre away with family. Companies will often tell you, ‚ÄúThis is our standard contract.‚Äù But there is almost always a way to amend or modify the contract. One company required that I take out car insurance, even though I planned to walk from my hotel to their office, and take an Uber between the airport and my hotel. The company couldn‚Äôt change the part of the contract that required me to get the insurance, but they could add an amendment that for this particular training, this particular time, on condition that I not rent a car, I was exempt from getting auto insurance. You can be serious about your work and yet do it with a dose of humor. I tell jokes when I‚Äôm teaching, and often I‚Äôm the only one laughing at the joke. Which is just fine. The computer industry will have ups and downs. Save during the good times, so that you can weather the bad ones. When things look like they might be going south, think about how you‚Äôll handle the coming year or two. And remember that every downturn ends, often with a sharp upturn ‚Äî so as bad as things might seem, they will almost certainly get better, often in unpredictable ways. About 20 years ago, I tried to found a startup. The ideas were good, and the team was good, but the execution was awful, and while we almost raised some money, we didn‚Äôt quite get there. Our failure was my fault. And I was pretty upset. And yet? In retrospect I‚Äôm happy that it didn‚Äôt happen, because I‚Äôve seen what it means to get an investment. The world needs investors and people with big enough dreams to need venture capital ‚Äì and I‚Äôm glad that I didn‚Äôt end up being one of them. Spend time with your family. I work very hard (probably too hard), but the satisfaction I get from work doesn‚Äôt come close to the satisfaction I get from spending time with my wife and children, or seeing them succeed. You can always do one more thing for work. But the time you spend with your family, especially when your children are little, won‚Äôt last long. Don‚Äôt skimp on retirement savings. Whatever your government allows you to put aside, do it. And then take something from your net income, and invest that, too. We started investing later than we should have, and while we‚Äôll be just fine, it would have been even better had we started years earlier. Take a part of your salary, and put it away on a regular basis. The world can use your help: Whether it‚Äôs by volunteering or donating to charity, you can and should be helping others who are less fortunate than yourself. (And yes, there are many people less fortunate than you, even if you‚Äôre only starting off.) Even a little time, or a little money, can make a difference ‚Äî most obviously to the organization you‚Äôre helping, but also to yourself, making you more aware of the issues in your community, and proud of having helped to solve them. Being in business means being an optimist, believing that you can succeed even when things are tough. (And they‚Äôre often tough!) But you should temper that with realism, ideally with others who are in business for themselves and can offer the skeptical, tough love that is often needed. Along those lines: You, your friends, and your family might love your product. But the only people who matter are your potential customers. Sometimes, a product you love, and which you believe deserves to succeed, won‚Äôt. Which hurts. It‚Äôs bad enough to fail, but it‚Äôs even worse to keep trying, when it‚Äôs clear that the world doesn‚Äôt want what you‚Äôre selling. You‚Äôll have other, better ideas, and the failed product will help to make that next one even better. If you can pay money to save time, do it. Big, famous companies seem faceless, big, and bureaucratic ‚Äî but they‚Äôre run by people, and it‚Äôs those personal relationships that allow things to get done. I‚Äôve taught numerous courses at Fortune 50 companies in which most details were handled via simple e-mail exchanges. As an outside contractor, I‚Äôve found that I encounter less red tape at some companies than many employees do. Learn how to learn new things quickly, and to integrate those new things into what you already know. I spend hours each week reading newsletters and blogs, watching YouTube videos, and chatting with Claude and ChatGPT in order to better understand topics that my students want to know more about. Acquire new skills: Over the last 30 years, I‚Äôve gained the ability to speak Chinese, to solve the New York Times crossword, and to run 10 km in less than one hour. Each of these involved slow, incremental progress over a long time, with inevitable setbacks. Not only have these skills given me a great sense of accomplishment, but they‚Äôve also helped me to empathize with my students, who sometimes fret that they won‚Äôt ever understand Python. I‚Äôve benefitted hugely from the fact that people in the computer industry switch jobs every few years. When a company calls me for the first time about training, it‚Äôs almost inevitably because one of their employees participated in one of my classes at their previous job. Over time, enough people changing employers has been great for my business. This just motivates me more to do a good job, since everyone there is a potential future recommendation. It‚Äôs easy to be jealous of the huge salaries and stock grants that people get when they work for big companies. I might earn less than many of those people, but I work on whatever projects I want, set my own schedule, and have almost no meetings. Plus, I don‚Äôt have to please a boss whose interests aren‚Äôt necessarily aligned with mine. That seems like a pretty good trade-off to me. Not everyone can afford Western-style high prices. That‚Äôs why I offer parity pricing on my LernerPython subscriptions, as well as discounts for students and retirees. I also give away a great deal of content for free, between my newsletters and YouTube channel ‚Äî not only because it‚Äôs good for marketing, but also because I feel strongly that everyone should be able to improve their Python skills, regardless of where they live in the world or what background they come from. Sure, paying clients will get more content and attention, but even people without any resources should be able to get something. Finally: I couldn‚Äôt have made it this far without the help of my family (wife, children, parents, siblings ‚Äî especially my sister), and many friends who gave me support, suggestions, and feedback over the years. Thanks to everyone who has supported me, and allowed me to last this long without a real job! [Note: I also published this on LinkedIn, at https://www.linkedin.com/pulse/30-things-ive-learned-over-years-business-reuven-lerner-rxu4f/?trackingId=SSgKz7QDFlH3oCZp9uVghQ%3D%3D.] The post 30 things I‚Äôve learned from 30 years as a Python freelancer appeared first on Reuven Lerner.

08.12.2025 11:36:27

Informační Technologie
3 dny

Things feel different in tech right now, don’t they? A few years back, landing a dev or data role felt like winning the lottery. You learned some syntax, built a portfolio, and you were set. But in 2025, that safety net feels thin. We all know why. Artificial Intelligence isn’t just a buzzword anymore. It’s sitting right there in your IDE. You might be asking: Is my job safe? Here is the honest answer. If your day-to-day work involves taking a clear set of instructions and turning them into code, your role is shaky. We have tools now that generate boilerplate, write solid SQL, and slap together UI components faster than any human. But here is the good news. The job isn’t disappearing. It’s just moving up a level. The industry is hungry for people who can think, design, and fix messy problems. To survive this shift, you need to stop acting like a translator for computers and start acting like an architect of systems. You need future proof coding skills. The Shift: From “Code Monkey” to Problem Solver I remember my first real wake-up call as a junior dev. I spent three days writing a script to parse some logs. I was so proud of my regex. Then, a senior engineer looked at it, shook his head, and said, “Why didn’t you just fix the logging format at the source?” I was focused on the code. He was focused on the system. That is the difference. AI can write the regex. AI cannot see that the logging format is the actual problem. Here is how you make yourself indispensable in 2025. 1. Think in Systems, Not Just Syntax Most of us learned to code by memorizing rules. “Here is a loop,” or “Here is a class.” But real software engineering is about managing chaos. Take Object-Oriented Programming (OOP). It’s not just about making a class for a “Car” or a “Dog.” It’s a way to map out a complex business problem so it doesn’t collapse under its own weight later. AI can spit out a class file in seconds. But it lacks the vision to plan how twenty different objects should talk to each other over the next two years. Or look at Functional Programming. It sounds academic, but for data roles, it’s vital. It teaches you to write code that doesn’t change things unexpectedly. When you are processing terabytes of data, “side effects” (random changes to data) are a nightmare. Learning to write pure, predictable functions keeps your data pipelines from exploding. 2. Don’t Wait for a Ticket The average developer waits for work to be assigned. The indispensable developer goes hunting for it. Every company is full of waste. The marketing team manually fixing a spreadsheet every Monday. The operations guy copy-pasting files between folders. This is your chance. You need an automation-first mindset. Learn to write scripts that touch the file system, scrape messy data, and handle errors gracefully. If a network connection drops, a bad script crashes. A good tool waits, retries, logs the issue, and keeps going. AI can write the script if you tell it exactly what to do. But you are the one who has to notice the inefficiency, talk to the marketing manager, and design the tool that actually helps them. 3. Treat Data Like Gold In 2025, data literacy isn’t optional. You need to know your Data Structures. I’m not talking about passing a whiteboard interview. I mean knowing the trade-offs. List vs. Set: If you need to check if an item exists inside a collection a million times, a List will choke your CPU. A Set will do it instantly. Immutability: knowing when to use a Tuple so other developers (and you, six months from now) know this data must not change. These small choices add up. They determine if your application runs smoothly or crawls to a halt. AI often defaults to the simplest option, not the best one. A Gift to Get You Started Talking about these concepts is easy. Doing the work is harder. I want to help you take that first step. I found a resource that covers these exact mechanics—from the basics of variables to the bigger picture of OOP and file handling. It is called the Python Complete Course For Beginners. It’s a solid starting point to build the technical muscle you need to stop just “writing code” and start building systems. I have a coupon that makes it 100% free. These coupons don’t last long, so grab it while you can. Click here to access the free course You can find the link to access the course for free in the bottom of the post. The Bottom Line Don’t let the headlines scare you. The demand for engineers who can solve fuzzy, real-world problems is higher than ever. The code is just a tool. The value is you. Level up your thinking. Master the tools that let you control the machine, rather than compete with it. Stay curious, Boucodes and Naima / 10xdev blog Team

08.12.2025 08:46:51

Informační Technologie
3 dny

Last week urllib3 v2.6.0 was released which contained removals for several APIs that we've known were problematic since 2019 and have been deprecated since 2022. The deprecations were marked in the documentation, changelog, and what I incorrectly believed would be the most meaningful signal to users: with a DeprecationWarning being emitted for each use for the API. The API that urllib3 recommended users use instead has the same features and no compatibility issues between urllib3 1.x and 2.x: resp = urllib3.request("GET", "https://example.com") # Deprecated APIs resp.getheader("Content-Length") resp.getheaders() # Recommended APIs resp.headers.get("Content-Length") resp.headers This API was emitting warnings for over 3 years in a top-3 Python package by downloads urging libraries and users to stop using the API and that was not enough. We still received feedback from users that this removal was unexpected and was breaking dependent libraries. We ended up adding the APIs back and creating a hurried release to fix the issue. It's not clear to me that waiting longer would have helped, either. The libraries that were impacted are actively developed, like the Kubernetes client, Fastly client, and Airflow and I trust that if the message had reached them they would have taken action. My conclusion from this incident is that DeprecationWarning in its current state does not work for deprecating APIs, at least for Python libraries. That is unfortunate, as DeprecationWarning and the warnings module are easy-to-use, language-“blessed”, and explicit without impacting users that don't need to take action due to deprecations. Any other method of deprecating API features is likely to be home-grown and different across each project which is far worse for users and project maintainers. Possible solutions? DeprecationWarning is called out in the “ignored by default” list for Python. I could ask for more Python developers to run with warnings enabled, but solutions in the form of “if only we could all just” are a folly. Maybe the answer is for each library to create its own “deprecation warning” equivalent just to not be in the “ignored by default” list: import warnings class Urllib3DeprecationWarning(UserWarning): pass warnings.warn( "HTTPResponse.getheader() is deprecated", category=Urllib3DeprecationWarning, stacklevel=2 ) Maybe the answer is to do away with advance notice and adopt SemVer with many major versions, similar to how Cryptography operates for API compatibility. Let me know if you have other ideas. Thanks for keeping RSS alive! ♥

08.12.2025 00:00:00

Informační Technologie
4 dny

Remember the pure, unadulterated joy (and occasional rage) of games like Breakout and Arkanoid? Dodging, bouncing, and strategically smashing bricks for that satisfying thwack? Well, get ready for brkrs – a modern, full-featured brick-breaker that brings all that classic arcade action to a new generation, built with cutting-edge Rust 🦀 and the incredibly flexible Bevy game engine! Want to jump straight into the action or peek under the hood? Find everything here: github.com/cleder/brkrs brkrs isn't just another clone; it's a love letter to the genre, packed with modern physics, dynamic levels, and a secret weapon: it's entirely open-source, designed for you to play, tinker, and even contribute! 🚀 The Story: From Retro Dreams to Modern Reality Many of us have dreamed of remaking our favorite classics. For me, that dream was to revive an old Arkanoid-style game, "YaAC 🐧", using today's best game development tools. What started as a manual journey quickly evolved into something much more: a real game that's also a living showcase of modern game dev practices. It’s built on a philosophy of "Kaizen no michi" (改善の道) – making small, continuous improvements. This means the game is always evolving, and every change is carefully considered. 🕹️ Play It Now: Levels That Challenge, Physics That Impress No downloads needed to get a taste of the action! Hit up the web version and start smashing bricks here Sorry at this time its only 2 levels (it is still early in the development process), but 70 more (lifted from YAAC) are coming soon, so stay tuned, or even better, help to make it come true ;-) brkrs extends the classic formula with some seriously cool features: Classic Gameplay, Modern Feel: Paddle, ball, and bricks, but with a polished, satisfying punch. Rich Physics (Rapier3D): Experience accurate and engaging ball physics that make every bounce feel real. Dynamic Levels: Human-readable and easy-to-modify level configurations mean endless possibilities for custom stages. Paddle Rotation: Add a new layer of skill and strategy to your shots. Cross-Platform Fun: Play it on your desktop or directly in your browser thanks to WebAssembly! 🛠️ Go Deeper: A Game for Builders, Too For those who love to dive into the mechanics of their favourite games, brkrs is a treasure trove. It's not just playable; it's also a fantastic example of a well-structured Rust and Bevy project. Want to try building it yourself? You'll need Rust, Cargo, and Git. git clone https://github.com/cleder/brkrs.git cd brkrs cargo run --release Controls: Move the paddle with your mouse, use the scroll wheel to rotate (if enabled), and hit ESC to pause. This is your chance to not just play, but to truly tinker. Ever wanted to add a new power-up? Change how a brick explodes? Or even design your own crazy levels? brkrs makes it approachable. 🧠 Behind the Scenes: Spec-Driven Awesomeness The game's development isn't just chaotic coding; it's built on spec-driven development (SDD). This means every feature starts with a clear, detailed plan, much like a game designer's blueprint. We even use GitHub's spec-kit to formalize these plans. It's a structured way to ensure every piece of the game works exactly as intended, minimizing bugs and maximizing fun. And here's the kicker: this clear, step-by-step approach makes brkrs a perfect playground for experimenting with AI-assisted coding. Imagine using AI to help design a new brick type or tweak game logic – the structured specs make it surprisingly effective! 📣 Help Wanted: Your Skills Can Level Up brkrs! While the code is solid, a great game needs more than just logic! We are actively looking for creative community members to join the effort and help turn brkrs into a visually and aurally stunning experience. This is your chance to get your work into a real, playable, open-source game! 🎧 Sound & Music: We need satisfying sound effects (the thwack of a brick, the clink of a power-up) and engaging background music. 🎨 Art & Textures: Help us create unique brick textures, stylish paddle designs, backgrounds, and other necessary artwork. 📐 Level Design: Got an evil streak? Use the easy-to-modify level configuration files (RON) to create new, challenging, and fun level designs! 🧪 Testing & Feedback: Simply playing the game and reporting bugs or suggesting balance tweaks is incredibly valuable! If you're a designer, artist, musician, or just a gamer with a great eye for detail, reach out or submit a Pull Request with your contributions! 🤝 Join the Fun: Learn, Contribute, Create! brkrs is more than a game; it's a community project following "Seika no Ho" (清華の法), "the way of clear planning." Play the Game: Enjoy the current levels and discover new strategies. Explore the Code: See how modern Rust and Bevy work in a real project. Suggest Ideas: What power-ups or brick types would YOU like to see? Contribute: Even small tweaks or new level designs are welcome! Full documentation, quickstart guides, and developer resources are all available on brkrs.readthedocs.io. Ready to break some bricks and make some waves in game development?

07.12.2025 20:33:55

Informační Technologie
4 dny

Tired of tutorial code that stops working the moment the lesson ends? Meet brkrs—a fully playable, Arkanoid/Breakout-style game written in Rust 🦀 and built with the Bevy engine. But this isn't just a game. It's an open-source learning playground dedicated to spec-first development and AI-assisted coding experiments. Check out the full repository here: github.com/cleder/brkrs As Linus Torvalds famously said: “Talk is cheap. Show me the code.” We say: "Show me the game, the spec, and the code all at once!" 💡 The Philosophy: Spec-First, Incremental, and AI-Ready Game development, especially in a framework like Bevy, can be a steep climb. The brkrs project was born from the desire to take an old idea (an Arkanoid clone) and build it the modern way—a way that accelerates learning and embraces new tooling. We follow a simple, yet powerful, development loop: Spec-First: Every single feature, no matter how small, begins as a clear specification using GitHub's speckit. Incremental PRs: The spec flows through a small, focused issue or Pull Request. This embodies the "Kaizen no michi" (改善の道) philosophy of small, positive, daily changes. Code & Play: The result is working Rust code you can immediately see in the game. This structured approach makes brkrs the perfect sandbox for the AI coding community: Agentic Testing: Need a small, contained task for your coding agent? Point it at a spec and a pending issue. AI-Assisted Feature Dev: Want to see how your favorite LLM handles adding a new brick behavior or adjusting physics? The clear specs provide the perfect prompt. Workflow Learning: Every merged PR is a clean, documented example of how a real-world feature is implemented in Rust/Bevy. What is Spec-Driven Development? The core of our workflow is the use of GitHub's spec-kit. This is a framework for spec-driven development (SDD), an approach where detailed, human-readable specifications are written before any code. SDD serves as the single source of truth for the desired behavior of a feature. By providing clear inputs, outputs, and requirements upfront, it minimizes guesswork, aligns team expectations, and provides a perfect, structured input for any AI coding assistant or agent. 🕹️ Try It Now: Playable & Pluggable You don't need to compile anything to get started! Play the live web version right now! The core experience extends the classic Breakout formula with: Richer Physics (via Rapier3D) constrained to a flat 2D plane. Paddle Rotation and customizable per-level settings. Human-readable Levels that are easy to modify and extend using RON files. 🛠️ Quickstart: Play, Tweak, and Learn Ready to dive into the code? You'll need Rust, Cargo, and Git. git clone https://github.com/cleder/brkrs.git cd brkrs cargo run --release Controls: Move the paddle with the mouse, use the scroll wheel to rotate, and ESC to pause. Now, the fun begins. Want to change the gravity for Level 3? Want to create a new HyperBrick component? The entire architecture—from the Level Loader to the Brick System—is designed for easy modification. Challenge: Following the Samurai principle of "Seika no Ho" (清華の法), "the way of clear planning," pick a small feature, write a mini-spec, and implement it. 🤝 Your Learning Path and Contribution The goal is to make learning modern Rust/Bevy development as enjoyable as playing the game. Here’s how you can engage: Read a Spec: Check out the repo or wiki for a feature you'd like to see. Pick an Issue: Find a small, contained task that aligns with a spec. Experiment with AI: Use your favourite AI tool (e.g., GitHub Copilot, a local agent) to help draft the code for the task. Submit a PR: Show the community how you turned a spec into working Rust code! brkrs is more than just a Breakout clone—it’s a living textbook for best practices in modern, spec-driven, and AI-augmented software development. 🔗 Documentation All the details you need to get started are right here: Full Documentation Quickstart Guide — Get up and running in 10 minutes. Ready to break some bricks and code?

07.12.2025 19:55:11

Informační Technologie
6 dní

I have just released version 0.9.11 of Shed Skin, a restricted-Python-to-C++ compiler. Most importantly, it adds support for Python 3.14. It also adds support for many 3.x features that were not yet implemented, in addition to basic support for the base64 module. It also optimizes a few more common code patterns. Paul Boddie was able to add support for libpcre2, and in the process updated conan to version 2. Thanks to Shakeeb and now Paul, Shed Skin has had first-class Windows support for the last few releases. A new release is often triggered by a nice new example. In this case I found an advanced/educational 3d renderer by Benny Bobaganoosh, and rewrote it from Java to Python. In ~500 lines of code, it renders an .obj file with perspective-correct texture mapping and so on, clipping, lighting.. It becomes about 13 times faster after compilation (in other words, it goes from about 2 to about 30 FPS). For the full list of changes in the release, please see the release notes. Something I have noticed while working on this release is that small object allocations seem to have become faster under Linux, to the degree that programs that would become _slower_ after compilation because of excessive small-object allocation, are now usually _faster_ again, at least on my system. This motivated me to measure the speedup for all 84 example programs at the moment versus cpython 3.13. While it's still all over the place, I was happy to see a median speedup of 12 times, and an average of 20 times. I would very much appreciate more feedback on/assistance with the project. There is always enough low-hanging fruit to help with! See for example the current list of issues for 0.9.12. But just testing random things, finding interesting new example programs, cleaning up parts of the code and such are also much appreciated.

05.12.2025 03:08:21

Informační Technologie
14 dní

NiceGUI is a Python library that allows developers to create interactive web applications with minimal effort. It's intuitive and easy to use. It provides a high-level interface to build modern web-based graphical user interfaces (GUIs) without requiring deep knowledge of web technologies like HTML, CSS, or JavaScript. In this article, you'll learn how to use NiceGUI to develop web apps with Python. You'll begin with an introduction to NiceGUI and its capabilities. Then, you'll learn how to create a simple NiceGUI app in Python and explore the basics of the framework's components. Finally, you'll use NiceGUI to handle events and customize your app's appearance. To get the most out of this tutorial, you should have a basic knowledge of Python. Familiarity with general GUI programming concepts, such as event handling, widgets, and layouts, will also be beneficial. Table of Contents Installing NiceGUI Writing Your First NiceGUI App in Python Exploring NiceGUI Graphical Elements Text Elements Control Elements Data Elements Audiovisual Elements Laying Out Pages in NiceGUI Handling Events and Actions in NiceGUI Conclusion Installing NiceGUI Before using any third-party library like NiceGUI, you must install it in your working environment. Installing NiceGUI is as quick as running the python -m pip install nicegui command in your terminal or command line. This command will install the library from the Python Package Index (PyPI). It's a good practice to use a Python virtual environment to manage dependencies for your project. To create and activate a virtual environment, open a command line or terminal window and run the following commands in your working directory: Windows macOS Linux sh PS> python -m venv .\venv PS> .\venv\Scripts\activate sh $ python -m venv venv/ $ source venv/bin/activate sh $ python3 -m venv venv/ $ source venv/bin/activate The first command will create a folder called venv/ containing a Python virtual environment. The Python version in this environment will match the version you have installed on your system. Once your virtual environment is active, install NiceGUI by running: sh (venv) $ python -m pip install nicegui With this command, you've installed NiceGUI in your active Python virtual environment and are ready to start building applications. Writing Your First NiceGUI App in Python Let's create our first app with NiceGUI and Python. We'll display the traditional "Hello, World!" message in a web browser. To create a minimal NiceGUI app, follow these steps: Import the nicegui module. Create a GUI element. Run the application using the run() method. Create a Python file named app.py and add the following code: python from nicegui import ui ui.label('Hello, World!').classes('text-h1') ui.run() This code defines a web application whose UI consists of a label showing the Hello, World! message. To create the label, we use the ui.label element. The call to ui.run() starts the app. Run the application by executing the following command in your terminal: sh (venv) $ python app.py This will open your default browser, showing a page like the one below: First NiceGUI Application Congratulations! You've just written your first NiceGUI web app using Python. The next step is to explore some features of NiceGUI that will allow you to create fully functional web applications. If the above command doesn't open the app in your browser, then go ahead and navigate to http://localhost:8080. Exploring NiceGUI Graphical Elements NiceGUI elements are the building blocks that we'll arrange to create pages. They represent UI components like buttons, labels, text inputs, and more. The elements are classified into the following categories: Text elements Controls Data elements Audiovisual elements In the following sections, you'll code simple examples showcasing a sample of each category's graphical elements. Text Elements NiceGUI also has a rich set of text elements that allow you to display text in several ways. This set includes some of the following elements: Labels Links Chat messages Markdown containers reStructuredText containers HTML text The following demo app shows how to create some of these text elements: python from nicegui import ui # Text elements ui.label("Label") ui.link("PythonGUIs", "https://pythonguis.com") ui.chat_message("Hello, World!", name="PythonGUIs Chatbot") ui.markdown( """ # Markdown Heading 1 **bold text** *italic text* `code` """ ) ui.restructured_text( """ ========================== reStructuredText Heading 1 ========================== **bold text** *italic text* ``code`` """ ) ui.html("<strong>bold text using HTML tags</strong>") ui.run(title="NiceGUI Text Elements") In this example, we create a simple web interface showcasing various text elements. The page shows several text elements, including a basic label, a hyperlink, a chatbot message, and formatted text using the Markdown and reStructuredText markup languages. Finally, it shows some raw HTML. Each text element allows us to present textual content on the page in a specific way or format, which gives us a lot of flexibility for designing modern web UIs. Run it! Your browser will open with a page that looks like the following. Text Elements Demo App in NiceGUI Control Elements When it comes to control elements, NiceGUI offers a variety of them. As their name suggests, these elements allow us to control how our web UI behaves. Here are some of the most common control elements available in NiceGUI: Buttons Dropdown lists Toggle buttons Radio buttons Checkboxes Sliders Switches Text inputs Text areas Date input The demo app below showcases some of these control elements: python from nicegui import ui # Control elements ui.button("Button") with ui.dropdown_button("Edit", icon="edit", auto_close=True): ui.item("Copy") ui.item("Paste") ui.item("Cut") ui.toggle(["ON", "OFF"], value="ON") ui.radio(["NiceGUI", "PyQt6", "PySide6"], value="NiceGUI").props("inline") ui.checkbox("Enable Feature") ui.slider(min=0, max=100, value=50, step=5) ui.switch("Dark Mode") ui.input("Your Name") ui.number("Age", min=0, max=120, value=25, step=1) ui.date(value="2025-04-11") ui.run(title="NiceGUI Control Elements") In this app, we include several control elements: a button, a dropdown menu with editing options (Copy, Paste, Cut), and a toggle switch between ON and OFF states. We also have a radio button group to choose between GUI frameworks (NiceGUI, PyQt6, PySide6), a checkbox labeled Enable Feature, and a slider to select a numeric value within a range. Further down, we have a switch to toggle Dark Mode, a text input field for entering a name, a number input for providing age, and a date picker. Each of these controls has its own properties and methods that you can tweak to customize your web interfaces using Python and NiceGUI. Note that the elements on this app don't perform any action. Later in this tutorial, you'll learn about events and actions. For now, we're just showcasing some of the available graphical elements of NiceGUI. Run it! You'll get a page that will look something like the following. Text Elements Demo App in NiceGUI Data Elements If you're in the data science field, then you'll be thrilled with the variety of data elements that NiceGUI offers. You'll find elements for some of the following tasks: Representing data in a tabular format Creating plots and charts Building different types of progress charts Displaying 3D objects Using maps Creating tree and log views Presenting and editing text in different formats, including plain text, code, and JSON Here's a quick NiceGUI app where we use a table and a plot to present temperature measurements against time: python from matplotlib import pyplot as plt from nicegui import ui # Data elements time = [1, 2, 3, 4, 5, 6] temperature = [30, 32, 34, 32, 33, 31] columns = [ { "name": "time", "label": "Time (min)", "field": "time", "sortable": True, "align": "right", }, { "name": "temperature", "label": "Temperature (ºC)", "field": "temperature", "required": True, "align": "right", }, ] rows = [ {"temperature": temperature, "time": time} for temperature, time in zip(temperature, time) ] ui.table(columns=columns, rows=rows, row_key="name") with ui.pyplot(figsize=(5, 4)): plt.plot(time, temperature, "-o", color="blue", label="Temperature") plt.title("Temperature vs Time") plt.xlabel("Time (min)") plt.ylabel("Temperature (ºC)") plt.ylim(25, 40) plt.legend() ui.run(title="NiceGUI Data Elements") In this example, we create a web interface that displays a table and a line plot. The data is stored in two lists: one for time (in minutes) and one for temperature (in degrees Celsius). These values are formatted into a table with columns for time and temperature. To render the table, we use the ui.table element. Below the table, we create a Matplotlib plot of temperature versus time and embed it in the ui.pyplot element. The plot has a title, axis labels, and a legend. Run it! You'll get a page that looks something like the following. Data Elements Demo App in NiceGUI Audiovisual Elements NiceGUI also has some elements that allow us to display audiovisual content in our web UIs. The audiovisual content may include some of the following: Images Audio files Videos Icons Avatars Scalable vector graphics (SVG) Below is a small demo app that shows how to add a local image to your NiceGUI-based web application: python from nicegui import ui with ui.image("./otje.jpg"): ui.label("Otje the cat!").classes( "absolute-bottom text-subtitle2 text-center" ) ui.run(title="NiceGUI Audiovisual Elements") In this example, we use the ui.image element to display a local image on your NiceGUI app. The image will show a subtitle at the bottom. NiceGUI elements provide the classes() method, which allows you to apply Tailwind CSS classes to the target element. To learn more about using CSS for styling your NiceGUI apps, check the Styling & Appearance section in the official documentation. Run it! You'll get a page that looks something like the following. Audiovisual Elements Demo App in NiceGUI Laying Out Pages in NiceGUI Laying out a GUI so that every graphical component is in the right place is a fundamental step in any GUI project. NiceGUI offers several elements that allow us to arrange graphical elements to build a nice-looking UI for our web apps. Here are some of the most common layout elements: Cards wrap another element in a frame. Column arranges elements vertically. Row arranges elements horizontally. Grid organizes elements in a grid of rows and columns. List displays a list of elements. Tabs organize elements in dedicated tabs. You'll find several other elements that allow you to tweak how your app's UI looks. Below is a demo app that combines a few of these elements to create a minimal but well-organized user profile form: python from nicegui import ui with ui.card().classes("w-full max-w-3xl mx-auto shadow-lg"): ui.label("Profile Page").classes("text-xl font-bold") with ui.row().classes("w-full"): with ui.card(): ui.image("./profile.png") with ui.card_section(): ui.label("Profile Image").classes("text-center font-bold") ui.button("Change Image", icon="photo_camera") with ui.card().classes("flex-grow"): with ui.column().classes("w-full"): name_input = ui.input( placeholder="Your Name", ).classes("w-full") gender_select = ui.select( ["Male", "Female", "Other"], ).classes("w-full") eye_color_input = ui.input( placeholder="Eye Color", ).classes("w-full") height_input = ui.number( min=0, max=250, value=170, step=1, ).classes("w-full") weight_input = ui.number( min=0, max=500, value=60, step=0.1, ).classes("w-full") with ui.row().classes("justify-end gap-2 q-mt-lg"): ui.button("Reset", icon="refresh").props("outline") ui.button("Save", icon="save").props("color=primary") ui.run(title="NiceGUI Layout Elements") In this app, we create a clean, responsive profile information page using a layout based on the ui.card element. We center the profile form and cap it at a maximum width for better readability on larger screens. We organize the elements into two main sections: A profile image card on the left and a form area on the right. The left section displays a profile picture using the ui.image element with a Change Image button underneath. A series of input fields for personal information, including the name in a ui.input element, the gender in a ui.select element, the eye color in a ui.input element, and the height and weight in ui.number elements. At the bottom of the form, we add two buttons: Reset and Save. We use consistent CSS styling throughout the layout to guarantee proper spacing, shadows, and responsive controls. This ensures that the interface looks professional and works well across different screen sizes. Run it! Here's how the form looks on the browser. A Demo Profile Page Layout in NiceGUI Handling Events and Actions in NiceGUI In NiceGUI, you can handle events like mouse clicks, keystrokes, and similar ones as you can in other GUI frameworks. Elements typically have arguments like on_click and on_change that are the most direct and convenient way to bind events to actions. Here's a quick app that shows how to make a NiceGUI app perform actions in response to events: python from nicegui import ui def on_button_click(): ui.notify("Button was clicked!") def on_checkbox_change(event): state = "checked" if event.value else "unchecked" ui.notify(f"Checkbox is {state}") def on_slider_change(event): ui.notify(f"Slider value: {event.value}") def on_input_change(event): ui.notify(f"Input changed to: {event.value}") ui.label("Event Handling Demo") ui.button("Click Me", on_click=on_button_click) ui.checkbox("Check Me", on_change=on_checkbox_change) ui.slider(min=0, max=10, value=5, on_change=on_slider_change) ui.input("Type something", on_change=on_input_change) ui.run(title="NiceGUI Events & Actions Demo") In this app, we first define four functions we'll use as actions. When we create the control elements, we use the appropriate argument to bind an event to a function. For example, in the ui.button element, we use the on_click argument, which makes the button call the associated function when we click it. We do something similar with the other elements, but use different arguments depending on the element's supported events. You can check the documentation of elements to learn about the specific events they can handle. Using the on_* type of arguments is not the only way to bind events to actions. You can also use the on() method, which allows you to attach event handlers manually. This approach is handy for less common events or when you want to attach multiple handlers. Here's a quick example: python from nicegui import ui def on_click(event): ui.notify(f"Button was clicked!") def on_hover(event): ui.notify(f"Button was hovered!") button = ui.button("Button") button.on("click", on_click) button.on("mouseover", on_hover) ui.run() In this example, we create a small web app with a single button that responds to two different events. When you click the button, the on_click() function triggers a notification. Similarly, when you hover the mouse over the button, the on_hover() function displays a notification. To bind the events to the corresponding function, we use the on() method. The first argument is a string representing the name of the target event. The second argument is the function that we want to run when the event occurs. Conclusion In this tutorial, you've learned the basics of creating web applications with NiceGUI, a powerful Python library for web GUI development. You've explored common elements, layouts, and event handling. This gives you the foundation to build modern and interactive web interfaces. For further exploration and advanced features, refer to the official NiceGUI documentation.

27.11.2025 06:00:00

Investigativní

Komentáře

Kryptoměny a Ekonomika

Sport

Svět

Technologie a věda

Technologie a věda
1 den

It appears that nearly every organization is planning to use artificial intelligence to improve operations. Although autonomous intelligent systems (AIS) can offer significant benefits, they also can be used unethically. The technology can create deepfakes, realistic-looking altered images and videos that help spread misinformation and disinformation. Meanwhile, AI systems trained on biased data can perpetuate discrimination in hiring, lending, and other practices. And surveillance systems that incorporate AI can lead to misidentification.Those issues have led to concerns about AIS trustworthiness, and it has become more crucial for AI developers and companies to ensure the systems they use and sell are ethically sound. To help them, the IEEE Standards Association (IEEE SA) launched its IEEE CertifAIEd ethics program, which offers two certifications: one for individuals and one for products.IEEE CertifAIEd was developed by an interdisciplinary group of AI ethics experts. The program is based on IEEE’s AI ethics framework and methodology, centered around the pillars of accountability, privacy, transparency, and avoiding bias. The program incorporates criteria outlined in the AI ontological specifications released under Creative Commons licenses.IEEE is the only international organization that offers the programs, says Jon Labrador, director for conformity assessment of IEEE SA programs.Assessment program detailsThe professional certification provides individuals with the skills to assess an AIS for adherence to IEEE’s methodology and ethics framework.Those with at least one year of experience in the use of AI tools or systems in their organization’s business processes or work functions are eligible to apply for the certification.You don’t have to be a developer or engineer to benefit from the training, Labrador says. Insurance underwriters, policymakers, human resources personnel, and others could benefit from it, he says.“Professionals from just about any industry or any company that’s using an AI tool to process business transactions are eligible for this program,” he says.The training program covers how to ensure that AI systems are open and understandable; identify and mitigate biases in algorithms; and protect personal data. The curriculum includes use cases. Courses are available in virtual, in-person, or self-study formats.Learners must take a final exam. Once they’ve successfully passed the test, they’ll receive their three-year IEEE professional certification, which is globally recognized, accepted, and respected, Labrador says.“With the certification, you’ll become a trusted source for reviewing AI tools used in your business processes, and you’ll be qualified to run an assessment,” he says. “It would be incumbent on a company to have a few IEEE CertifAIEd professionals to review its tools regularly to make sure they conform with the values identified in our program.”The self-study exam preparatory course is available to IEEE members at US $599; it costs $699 for nonmembers.Product assessmentsThe product certification program assesses whether an organization’s AI tool or AIS conforms to the IEEE framework and continuously aligns with legal and regulatory principles such as the European Union AI Act.An IEEE CertifiAIEd assessor evaluates the product to ensure it meets all criteria. There are more than 300 authorized assessors.Upon completion of the assessment, the company submits it to IEEE Conformity Assessment, which certifies the product and issues the certification mark.“That mark lets customers know that the company has gone through the rigors and is 100 percent in conformance with the latest IEEE AI ethics specifications,” Labrador says.“The IEEE CertifiAIEd program can also be viewed as a risk mitigation tool for companies,” he says, “reducing the risk of system or process failures with the introduction of a new AI tool or system in established business processes.”You can complete an application to begin the process of getting your product certified.

10.12.2025 19:00:02

Technologie a věda
1 den

The power surging through transmission lines over the iconic stone walls of England’s northern countryside is pushing the United Kingdom’s grid to its limits. To the north, Scottish wind farms have doubled their output over the past decade. In the south, where electricity demand is heaviest, electrification and new data centers promise to draw more power, but new generation is falling short. Construction on a new 3,280-megawatt nuclear power plant west of London lags years behind schedule.The result is a lopsided flow of power that’s maxing out transmission corridors from the Highlands to London. That grid strain won’t ease any time soon. New lines linking Scotland to southern England are at least three to four years from operation, and at risk of further delays from fierce local opposition.At the same time, U.K. Prime Minister Keir Starmer is bent on installing even more wind power and slashing fossil-fuel generation by 2030. His Labour government says low-carbon power is cheaper and more secure than natural gas, much of which comes from Norway via the world’s longest underwater gas pipeline and is vulnerable to disruption and sabotage. The lack of transmission lines available to move power flowing south from Scottish wind farms has caused grid congestion in England. To better manage it, the U.K. has installed SmartValves at three substations in northern England—Penwortham, Harker, and Saltholme—and is constructing a fourth at South Shields. Chris Philpot The U.K.’s resulting grid congestion prevents transmission operators from delivering some of their cleanest, cheapest generation to all of the consumers who want it. Congestion is a perennial problem whenever power consumption is on the rise. It pushes circuits to their thermal limits and creates grid stability or security constraints.With congestion relief needed now, the U.K.’s grid operators are getting creative, rapidly tapping new cable designs and innovations in power electronics to squeeze more power through existing transmission corridors. These grid-enhancing technologies, or GETs, present a low-cost way to bridge the gap until new lines can be built.“GETs allow us to operate the system harder before an investment arrives, and they save a s***load of money,” says Julian Leslie, chief engineer and director of strategic energy planning at the National Energy System Operator (NESO), the Warwick-based agency that directs U.K. energy markets and infrastructure. Transmission lines running across England’s countryside are maxed out, creating bottlenecks in the grid that prevent some carbon-free power from reaching customers. Vincent Lowe/Alamy The U.K.’s extreme grid challenge has made it ground zero for some of the boldest GETs testing and deployment. Such innovation involves some risk, because an intervention anywhere on the U.K.’s tightly meshed power system can have system-wide impacts. (Grid operators elsewhere are choosing to start with GETs at their systems’ periphery—where there’s less impact if something goes wrong.)The question is how far—and how fast—the U.K.’s grid operators can push GETs capabilities. The new technologies still have a limited track record, so operators are cautiously feeling their way toward heavier investment. Power system experts also have unanswered questions about these advanced grid capabilities. For example, will they create more complexity than grid operators can manage in real time? Might feedback between different devices destabilize the grid?There is no consensus yet as to how to even screen for such risks, let alone protect against them, says Robin Preece, professor in future power systems at the University of Manchester, in England. “We’re at the start of establishing that now, but we’re building at the same time. So it’s kind of this race between the necessity to get this technology installed as quickly as possible, and our ability to fully understand what’s happening.”How is the U.K. Managing Grid Congestion?One of the most innovative and high-stakes tricks in the U.K.’s toolbox employs electronic power-flow controllers, devices that shift electricity from jammed circuits to those with spare capacity. These devices have been able to finesse enough additional wind power through grid bottlenecks to replace an entire gas-fired generator. Installed in northern England four years ago by Smart Wires, based in Durham, N.C., these SmartValves are expected to help even more as NESO installs more of them and masters their capabilities.Warwick-based National Grid Electricity Transmission, the grid operator for England and Wales, is adding SmartValves and also replacing several thousand kilometers of overhead wire with advanced conductors that can carry more current. And it’s using a technique called dynamic line rating, whereby sensors and models work together to predict when weather conditions will allow lines to carry extra current.Other kinds of GETs are also being used globally. Advanced conductors are the most widely deployed. Dynamic line rating is increasingly common in European countries, and U.S. utilities are beginning to take it seriously. Europe also leads the world in topology-optimization software, which reconfigures power routes to alleviate congestion, and advanced power-flow-control devices like SmartValves. Engineers install dynamic line rating technology from the Boston-based company LineVision on National Grid’s transmission network. National Grid Electricity Transmission SmartValves’ chops stand out at the Penwortham substation in Lancashire, England, one of two National Grid sites where the device made its U.K. debut in 2021. Penwortham substation is a major transmission hub, whose spokes desperately need congestion relief. Auditory evidence of heavy power flows was clear during my visit to the substation, which buzzes loudly. The sound is due to the electromechanical stresses on the substation’s massive transformers, explains my guide, National Grid commissioned engineer Paul Lloyd.Penwortham’s transformers, circuits, and protective relays are spread over 15 hectares, sandwiched between pastureland and suburban homes near Preston, a small city north of Manchester. Power arrives from the north on two pairs of 400-kilovolt AC lines, and most of it exits southward via 400-kV and 275-kV double-circuit wires. Transmission lines lead to the congested Penwortham substation, which has become a test-bed for GETs such as SmartValves and dynamic line rating. Peter Fairley What makes the substation a strategic test-bed for GETs is its position just north of the U.K. grid’s biggest bottleneck, known as Boundary B7a, which runs east to west across the island. Nine circuits traverse the B7a: the four AC lines headed south from Penwortham, four AC lines closer to Yorkshire’s North Sea coast, and a high-voltage direct-current (HVDC) link offshore. In theory, those circuits can collectively carry 13.6 gigawatts across the B7a. But NESO caps its flow at several gigawatts lower to ensure that no circuits overload if any two lines turn off.Such limits are necessary for grid reliability, but they are leaving terawatt-hours of wind power stranded in Scotland and increasing consumers’ energy costs: an extra £196 million (US $265 million) in 2024 alone. The costs stem from NESO having to ramp up gas-fired generators to meet demand down south while simultaneously compensating wind-farm operators for curtailing their output, as required under U.K. policy.So National Grid keeps tweaking Penwortham. In 2011 the substation got its first big GET: phase-shifting transformers (PSTs), a type of analog flow controller. PSTs adjust power flow by creating an AC waveform whose alternating voltage leads or lags its alternating current. They do so by each PST using a pair of connected transformers to selectively combine power from an AC transmission circuit’s three phases. Motors reposition electrical connections on the transformer coils to adjust flows. Phase-shifting transformers (PSTs) were installed in 2012 at the Penwortham substation and are the analog predecessor to SmartValves. They’re powerful but also bulky and relatively inflexible. It can take 10 minutes or more for the PST’s motorized actuators at Penwortham to tap their full range of flow control, whereas SmartValves can shift within milliseconds.National Grid Electricity Transmission Penwortham’s pair of 540-tonne PSTs occupy the entire south end of the substation, along with their dedicated chillers, relays, and power supplies. Delivering all that hardware required extensive road closures and floating a huge barge up the adjacent River Ribble, an event that made national news.The SmartValves at Penwortham stand in stark contrast to the PSTs’ heft, complexity, and mechanics. SmartValves are a type of static synchronous series compensator, or SSSC—a solid-state alternative to PSTs that employs power electronics to tweak power flows in milliseconds. I saw two sets of them tucked into a corner of the substation, occupying a quarter of the area of the PSTs. The SmartValve V103 design [above] experienced some teething and reliability issues that were ironed out with the technology’s next iteration, the V104. National Grid Electricity Transmission/Smart Wires The SmartValves are first and foremost an insurance policy to guard against a potentially crippling event: the sudden loss of one of the B7a’s 400-kV lines. If that were to happen, gigawatts of power would instantly seek another route over neighboring lines. And if it happened on a windy day, when lots of power is streaming in from the north, the resulting surge could overload the 275-kV circuits headed from Penwortham to Liverpool. The SmartValves’ job is to save the day.They do this by adding impedance to the 275-kV lines, thus acting to divert more power to the remaining 400-kV lines. This rerouting of power prevents a blackout that could potentially cascade through the grid. The upside to that protection is that NESO can safely schedule an additional 350 MW over the B7a.The savings add up. “That’s 350 MW of wind you’re no longer curtailing from wind farms. So that’s 350 times £100 a megawatt-hour,” says Leslie, at NESO. “That’s also 350 MW of gas-fired power that you don’t need to replace the wind. So that’s 350 times £120 a megawatt-hour. The numbers get big quickly.”Mark Osborne, the National Grid lead asset life-cycle engineer managing its SmartValve projects, estimates the devices are saving U.K. customers over £100 million (US $132 million) a year. At that rate, they’ll pay for themselves “within a few years,” Osborne says. By utility standards, where investments are normally amortized over decades, that’s “almost immediately,” he adds.How Do Grid-Enhancing Technologies Work?The way Smart Wires’ SSSC devices adjust power flow is based on emulating impedance, which is a strange beast created by AC power. An AC flow’s changing magnetic field induces an additional voltage in the line’s conductor, which then acts as a drag on the initial field. Smart Wires’ SSSC devices alter power flow by emulating that natural process, effectively adding or subtracting impedance by adding their own voltage wave to the line. Adding a wave that leads the original voltage wave will boost flow, while adding a lagging wave will reduce flow.The SSSC’s submodules of capacitors and high-speed insulated-gate bipolar transistors operate in sequence to absorb power from a line and synthesize its novel impedance-altering waves. And thanks to its digital controls and switches, the device can within milliseconds flip from maximum power push to maximum pull.You can trace the development of SSSCs to the advent of HVDC transmission in the 1950s. HVDC converters take power from an AC grid and efficiently convert it and transfer it over a DC line to another point in the same grid, or to a neighboring AC grid. In 1985, Narain Hingorani, an HVDC expert at the Palo Alto–based Electric Power Research Institute, showed that similar converters could modulate the flow of an AC line. Four years later, Westinghouse engineer Laszlo Gyugyi proposed SSSCs, which became the basis for Smart Wires’ boxes.Major power-equipment manufacturers tried to commercialize SSSCs in the early 2000s. But utilities had little need for flow control back then because they had plenty of conventional power plants that could meet local demand when transmission lines were full.The picture changed as solar and wind generation exploded and conventional plants began shutting down. In years past, grid operators addressed grid congestion by turning power plants on or off in strategic locations. But as of 2024, the U.K. had shut down all of its coal-fired power plants—save one, which now burns wood—and it has vowed to slash gas-fired generation from about a quarter of electricity supply in 2024 to at most 5 percent in 2030.The U.K.’s extreme grid challenge has made it ground zero for some of the boldest GETs testing and deployment.To seize the emerging market opportunity presented by changing grid operations, Smart Wires had to make a crucial technology upgrade: ditching transformers. The company’s first SSSC, and those from other suppliers, relied on a transformer to absorb lightning, voltage surges, and every other grid assault that could fry their power electronics. This made them bulky and added cost. So Smart Wires engineers set to work in 2017 to see if they could live without the transformer, says Frank Kreikebaum, Smart Wires’s interim chief of engineering. Two years later the company had assembled a transformerless electronic shield. It consisted of a suite of filters and diverters, along with a control system to activate them. Ditching the transformer produced a trim, standardized product—a modular system-in-a-box.SmartValves work at any voltage and are generally ganged together to achieve a desired level of flow control. They can be delivered fast, and they fit in the kinds of tight spaces that are common in substations. “It’s not about cost, even though we’re competitive there. It’s about ‘how quick’ and ‘can it fit,’” says Kreikebaum.And if the grid’s pinch point shifts? The devices can be quickly moved to another substation. “It’s a Lego-brick build,” says Owen Wilkes, National Grid’s director of network design. Wilkes’s team decides where to add equipment based on today’s best projections, but he appreciates the flexibility to respond to unexpected changes.National Grid’s deployments in 2021 were the highest-voltage installation of SSSCs at the time, and success there is fueling expansion. National Grid now has packs of SmartValves installed at three substations in northern England and under construction at another, with five more installations planned in that area. Smart Wires has also commissioned commercial projects at transmission substations in Australia, Brazil, Colombia, and the United States.Dynamic Line Rating Boosts Grid EfficiencyIn addition to SSSCs, National Grid has deployed lidar that senses sag on Penwortham’s 275-kV lines—an indication that they’re starting to overheat. The sensors are part of a dynamic line rating system and help grid operators maximize the amount of current that high-voltage lines can carry based on near-real-time weather conditions. (Cooler weather means more capacity.) Now the same technology is being deployed across the B7a—a £1 million investment that is projected to save consumers £33 million annually, says Corin Ireland, a National Grid optimization engineer with the task of seizing GETs opportunities.There’s also a lot of old conductor wires being swapped out for those that can carry more power. National Grid’s business plan calls for 2,416 kilometers of such reconductoring over the coming five years, which is about 20 percent of its system. Scotland’s transmission operators are busy with their own big swaps. Scottish wind farms have doubled their power output over the past decade, but it often gets stranded due to grid congestion in England. Andreas Berthold/Alamy But while National Grid and NESO are making some of the boldest deployments of GETs in the world, they’re not fully tapping the technologies’ capabilities. That’s partly due to the conservative nature of power utilities, and partly because grid operators already have plenty to keep their eyes on. It also stems from the unknowns that still surround GETs, like whether they might take the grid in unforeseen directions if allowed to respond automatically, or get stuck in a feedback loop responding to each other. Imagine SmartValve controllers at different substations fighting, with one substation jumping to remove impedance that the other just added, causing fluctuating power flows.“These technologies operate very quickly, but the computers in the control room are still very reliant on people making decisions,” says Ireland. “So there are time scales that we have to take into consideration when planning and operating the network.”This kind of conservative dispatching leaves value on the table. For example, the dynamic line rating models can spit out new line ratings every 15 minutes, but grid operators get updates only every 24 hours. Fewer updates means fewer opportunities to tap the system’s ability to boost capacity. Similarly, for SmartValves, NESO activates installations at only one substation at a time. And control-room operators turn them on manually, even though the devices could automatically respond to faults within milliseconds. National Grid is upgrading transmission lines dating as far back as the 1960s. This includes installing conductors that retain their strength at higher temperatures, allowing them to carry more power. National Grid Electricity Transmission Modeling by Smart Wires and National Grid shows a significant capacity boost across Boundary B7a if Penwortham’s SmartValves were to work in tandem with another set further up the line. For example, when Penwortham is adding impedance to push megawatts off the 275-kV lines, a set closer to Scotland could simultaneously pull the power north, nudging the sum over to the B7a’s eastern circuits. Simulations by Andy Hiorns, a former National Grid planning director who consults for Smart Wires, suggest that this kind of cooperative action should increase the B7a circuits’ usable capacity by another 250 to 300 MW. “You double the effectiveness by using them as pairs,” he says.Operating multiple flow controllers may become necessary for unlocking the next boundary en route to London, south of the B7a, called Boundary B8. As dynamic line rating, beefier conductors, and SmartValves send more power across the B7a, lines traversing B8 are reaching their limits. Eventually, every boundary along the route will have to be upgraded.Meanwhile, back at its U.S. headquarters, Smart Wires is developing other applications for its SSSCs, such as filtering out power oscillations that can destabilize grids and reduce allowable transfers. That capability could be unlocked remotely with firmware.The company is also working on a test program that could turn on pairs of SmartValve installations during slack moments when there isn’t much going on in the control rooms. That would give National Grid and NESO operators an opportunity to observe the impacts, and to get more comfortable with the technology.National Grid and Smart Wires are also hard at work developing industry-first optimization software for coordinating flow-control devices. “It’s possible to extend the technology from how we’re using it today,” says Ireland at National Grid. “That’s the exciting bit.”NESO’s Julian Leslie shares that excitement and says he expects SmartValves to begin working together to ease power through the grid—once the operators have the modeling right and get a little more comfortable with the technology. “It’s a great innovation that has the potential to be really transformational,” he says. “We’re just not quite there yet.”

10.12.2025 14:00:02

Technologie a věda
2 dny

German utility RWE implemented the first known virtual power plant (VPP) in 2008, aggregating nine small hydroelectric plants for a total capacity of 8.6 megawatts. In general, a VPP pulls together many small components—like rooftop solar, home batteries, and smart thermostats—into a single coordinated power system. The system responds to grid needs on demand, whether by making stored energy available or reducing energy consumption by smart devices during peak hours.VPPs had a moment in the mid-2010s, but market conditions and the technology weren’t quite aligned for them to take off. Electricity demand wasn’t high enough, and existing sources—coal, natural gas, nuclear, and renewables—met demand and kept prices stable. Additionally, despite the costs of hardware like solar panels and batteries falling, the software to link and manage these resources lagged behind, and there wasn’t much financial incentive for it to catch up. But times have changed, and less than a decade later, the stars are aligning in VPPs’ favor. They’re hitting a deployment inflection point, and they could play a significant role in meeting energy demand over the next 5 to 10 years in a way that’s faster, cheaper, and greener than other solutions. U.S. Electricity Demand Is GrowingElectricity demand in the United States is expected to grow 25 percent by 2030 due to data center buildouts, electric vehicles, manufacturing, and electrification, according to estimates from technology consultant ICF International.At the same time, a host of bottlenecks are making it hard to expand the grid. There’s a backlog of at least three to five years on new gas turbines. Hundreds of gigawatts of renewables are languishing in interconnection queues, where there’s also a backlog of up to five years. On the delivery side, there’s a transformer shortage that could take up to five years to resolve, and a dearth of transmission lines. This all adds up to a long, slow process to add generation and delivery capacity, and it’s not getting faster anytime soon. “Fueling electric vehicles, electric heat, and data centers solely from traditional approaches would increase rates that are already too high,” says Brad Heavner, the executive director of the California Solar & Storage Association. Enter the vast network of resources that are already active and grid-connected—and the perfect storm of factors that make now the time to scale them. Adel Nasiri, a professor of electrical engineering at the University of South Carolina, says variability of loads from data centers and electric vehicles has increased, as has deployment of grid-scale batteries and storage. There are more distributed energy resources available than there were before, and the last decade has seen advances in grid management using autonomous controls.At the heart of it all, though, is the technology that stores and dispatches electricity on demand: batteries. Advances in Battery TechnologyOver the last 10 years, battery prices have plummeted: the average lithium-ion battery pack price fell from US $715 per kilowatt-hour in 2014 to $115 per kWh in 2024. Their energy density has simultaneously increased thanks to a combination of materials advancements, design optimization of battery cells, and improvements in the packaging of battery systems, says Oliver Gross, a senior fellow in energy storage and electrification at automaker Stellantis.The biggest improvements have come in batteries’ cathodes and electrolytes, with nickel-based cathodes starting to be used about a decade ago. “In many ways, the cathode limits the capacity of the battery, so by unlocking higher capacity cathode materials, we have been able to take advantage of the intrinsic higher capacity of anode materials,” says Greg Less, the director of the University of Michigan’s Battery Lab.Increasing the percentage of nickel in the cathode (relative to other metals) increases energy density because nickel can hold more lithium per gram than materials like cobalt or manganese, exchanging more electrons and participating more fully in the redox reactions that move lithium in and out of the battery. The same goes for silicon, which has become more common in anodes. However, there’s a trade-off: These materials cause more structural instability during the battery’s cycling.The anode and cathode are surrounded by a liquid electrolyte. The electrolyte has to be electrically and chemically stable when exposed to the anode and cathode in order to avoid safety hazards like thermal runaway or fires and rapid degradation. “The real revolution has been the breakthroughs in chemistry to make the electrolyte stable against more reactive cathode materials to get the energy density up,” says Gross. Chemical compound additives—many of them based on sulfur and boron chemistry—for the electrolyte help create stable layers between it and the anode and cathode materials. “They form these protective layers very early in the manufacturing process so that the cell stays stable throughout its life.”These advances have primarily been made on electric vehicle batteries, which differ from grid-scale batteries in that EVs are often parked or idle, while grid batteries are constantly connected and need to be ready to transfer energy. However, Gross says, “the same approaches that got our energy density higher in EVs can also be applied to optimizing grid storage. The materials might be a little different, but the methodologies are the same.” The most popular cathode material for grid storage batteries at the moment is lithium iron phosphate, or LFP.Thanks to these technical gains and dropping costs, a domino effect has been set in motion: The more batteries deployed, the cheaper they become, which fuels more deployment and creates positive feedback loops.Regions that have experienced frequent blackouts—like parts of Texas, California, and Puerto Rico—are a prime market for home batteries. Texas-based Base Power, which raised $1 billion in Series C funding in October, installs batteries at customers’ homes and becomes their retail power provider, charging the batteries when excess wind or solar production makes prices cheap, and then selling that energy back to the grid when demand spikes.There is, however, still room for improvement. For wider adoption, says Nasiri, “the installed battery cost needs to get under $100 per kWh for large VPP deployments.”Improvements in VPP SoftwareThe software infrastructure that once limited VPPs to pilot projects has matured into a robust digital backbone, making it feasible to operate VPPs at grid scale. Advances in AI are key: Many VPPs now use machine learning algorithms to predict load flexibility, solar and battery output, customer behavior, and grid stress events. This improves the dependability of a VPP’s capacity, which was historically a major concern for grid operators. While solar panels have advanced, VPPs have been held back by a lack of similar advancement in the needed software until recently.SunrunCybersecurity and interoperability standards are still evolving. Interconnection processes and data visibility in many areas aren’t consistent, making it hard to monitor and coordinate distributed resources effectively. In short, while the technology and economics for VPPs are firmly in place, there’s work yet to be done aligning regulation, infrastructure, and market design.On top of technical and cost constraints, VPPs have long been held back by regulations that prevented them from participating in energy markets like traditional generators. SolarEdge recently announced enrollment of more than 500 megawatt-hours of residential battery storage in its VPP programs. Tamara Sinensky, the company’s senior manager of grid services, says the biggest hurdle to achieving this milestone wasn’t technical—it was regulatory program design.California’s Demand Side Grid Support (DSGS) program, launched in mid-2022, pays homes, businesses, and VPPs to reduce electricity use or discharge energy during grid emergencies. “We’ve seen a massive increase in our VPP enrollments primarily driven by the DSGS program,” says Sinensky. Similarly, Sunrun’s Northern California VPP delivered 535 megawatts of power from home-based batteries to the grid in July, and saw a 400 percent increase in VPP participation from last year.FERC Order 2222, issued in 2020, requires regional grid operators to allow VPPs to sell power, reduce load, or provide grid services directly to wholesale market operators, and get paid the same market price as a traditional power plant for those services. However, many states and grid regions don’t yet have a process in place to comply with the FERC order. And because utilities profit from grid expansion and not VPP deployment, they’re not incentivized to integrate VPPs into their operations. Utilities “view customer batteries as competition,” says Heavner.According to Nasiri, VPPs would have a meaningful impact on the grid if they achieve a penetration of 2 percent of the market’s peak power. “Larger penetration of up to 5 percent for up to 4 hours is required to have a meaningful capacity impact for grid planning and operation,” he says.In other words, VPP operators have their work cut out for them in continuing to unlock the flexible capacity in homes, businesses, and EVs. Additional technical and policy advances could move VPPs from a niche reliability tool to a key power source and grid stabilizer for the energy tumult ahead.

09.12.2025 17:00:02

Technologie a věda
2 dny

When Dan Heller received his first batch of Dexcom’s latest continuous glucose monitors in early 2023, he decided to run a small experiment: He wore the new biosensor and the previous generation at the same time to see how they compared in measuring his glucose levels. The new, seventh-generation model (aptly called the G7) made by San Diego-based healthcare company Dexcom had just begun shipping in the United States. Dexcom claimed the G7 to be the “most accurate sensor” available to the thousands of people with Type 1 diabetes who use continuous glucose monitors to help manage their blood sugars. But Heller found that its real-world performance wasn’t up to par. In a September 2023 post on his Substack, which is dedicated to covering Type 1 diabetes research and management, he wrote about the experience and predicted an increase in adverse events with the G7, drawing on his past experience leading tech and biotech companies. In the two years since Heller’s experiment, many other users have reported issues with the device. Some complaints regard failed connection and deployment issues, which Dexcom claims to have now addressed. More concerning are reports of erratic, inaccurate readings. A public Facebook group dedicated to sharing negative experiences with the G7 has grown to thousands of users, and several class action lawsuits have been filed against the company, alleging false advertising and misleading claims about device accuracy. Yet, based on a standard metric in the industry, the G7 is one of the most accurate glucose sensors available. “Accuracy in the performance of our device is our number one priority. We understand this is a lifesaving device for people with Type 1 diabetes,” Peter Simpson, Dexcom’s senior vice president of innovation and sensor technology, told IEEE Spectrum. Simpson acknowledged some variability in individual sensors, but stood by the accuracy of the devices.So why have users faced issues? In part, metrics used in marketing can be misleading compared to real world performance. Differences in study design, combined with complex biological realities, mean that the accuracy of these biosensors can’t be boiled down to one number—and users are learning this the hard way. Dexcom’s Glucose MonitorsContinuous glucose monitors (CGMs) typically consist of a small filament inserted under the skin, a transmitter, and a receiver. The filament is coated with an enzyme that generates an electrical signal when it reacts with glucose in the fluid surrounding the body’s cells. That signal is then converted to a digital signal and processed to generate glucose readings every few minutes. Each sensor lasts a week or two before needing to be replaced. The technology has come a long way in recent years. In the 2010s, these devices required blood glucose calibrations twice a day and still weren’t reliable enough to dose insulin based on the readings. Now, some insulin pumps use the near-real-time data to automatically make adjustments. With those improvements has come greater trust in the data users receive—and higher standards. A faulty reading could result in a dangerous dose of insulin. The G7 introduced several changes to Dexcom’s earlier designs, including a much smaller footprint, and updated the algorithm used to translate sensor signals into glucose readings for better accuracy, Simpson says. “From a performance perspective, we did demonstrate in a clinical trial that the G7 is significantly more accurate than the G6,” he says. So Heller and others were surprised when the new Dexcom sensor seemed to be performing worse. For some batches of sensors, it’s possible that the issue was in part due to an unvalidated change in a component used in a resistive layer of the sensors. The new component showed worse performance, according to a warning letter issued by the U.S. Food and Drug Administration in March 2025, following an audit of two U.S. manufacturing sites. The material has since been removed from all G7 sensors, Simpson says, and the company is continuing to work with the FDA to address concerns. (“The warning letter does not restrict Dexcom’s ability to produce, market, manufacture or distribute products, require recall of any products, nor restrict our ability to seek clearance of new products,” Dexcom added in a statement.)“There is a distribution of accuracies that have to do with people’s physiology and also the devices themselves. Even in our clinical studies, we saw some that were really precise and some that had a little bit of inaccuracy to them,” says Simpson. “But in general, our sensor is very accurate.”In late November Abbott—one of Dexcom’s main competitors—recalled some of its CGMs due to inaccurate low glucose readings. The recall affects approximately 3 million sensors and was caused by an issue with one of Abbott’s production lines. The discrepancy between reported accuracy and user experience, however, goes beyond any one company’s manufacturing missteps. Does MARD Matter? The accuracy of CGM systems is frequently measured via “mean absolute relative difference,” or MARD, a percentage that compares the sensor readings to laboratory blood glucose measurements. The lower the MARD, the more accurate the sensor. This number is often used in advertising and marketing, and it has a historical relevance, says Manuel Eichenlaub, a biomedical engineer at the Institute for Diabetes Technology Ulm in Germany, where he and his colleagues conduct independent CGM performance studies. For years, there was a general belief that a MARD under 10 percent meant a system would be accurate enough to be used for insulin dosing. In 2018, the FDA established a specific set of accuracy requirements beyond MARD for insulin-guiding glucose monitors, including Dexcom’s. But manufacturers design the clinical trials that determine accuracy metrics, and the way studies are designed can make a big difference. When Dan Heller wore the Dexcom G6 and G7 at the same time, he says he noticed the G7 readings were more erratic, making it more difficult to properly control his blood sugar. Dan Heller For instance, blood glucose levels serve as the “ground truth to compare the CGM values against,” says Eichenlaub. But glucose levels vary across blood compartments in the body; blood collected from capillaries with a finger prick fluctuates more and can have glucose levels around 5 to 10 percent higher than venous blood. (Dexcom tests against a gold-standard venous blood analyzer. When users see inaccuracies against home meters that use capillary blood, it could in part be a reflection of the meter’s own inaccuracy, Simpson says, though he acknowledges real inaccuracies in CGMs as well.)Additionally, the distribution of sampling isn’t standardized. CGMs are known to be less accurate at the beginning and end of use, or when glucose levels are out of range or changing quickly. That means measured accuracy could be skewed by taking fewer samples right after a meal or late in the CGM’s lifetime. According to Simpson, Dexcom’s trial protocol meets the FDA’s expectation and tests the devices in different blood sugar ranges across the life of the sensor. “Within these clinical trials, we do stress the sensors to try and simulate those real world conditions,” he says. Dexcom and other companies advertise a MARD around 8 percent. But some independent studies are more demanding and find higher numbers; a head-to-head study of three popular CGMs that Eichenlaub led found MARD values closer to 10 percent or higher.Eichenlaub and other CGM experts believe that more standardization of testing and an extension of the FDA requirements are necessary, so they recently proposed comprehensive guidelines on CGM performance testing. In the United States and Europe, a few manufacturers currently dominate the market. But newer players are entering the growing market and, especially in Europe, may not meet the same standards as legacy manufacturers, he says. “Having a standardized way of evaluating the performance of those systems is very important.”For users like Heller though, better accuracy only matters if it yields better diabetes management. “I don’t care about MARD. I want data that is reliably actionable,” Heller says. He encourages engineers working on these devices to think like the patient. “At some point, there’s quantitative data, but you need qualitative data.”

09.12.2025 16:09:28

Technologie a věda
3 dny

If you’ve shopped on Amazon in the past few months, you might have noticed it has gotten easier to find what you’re looking for. Listings now have more images, detailed product names, and better descriptions. The website’s predictive search feature uses the listing updates to anticipate needs and suggests a list of items in real time as you type in the search bar.The improved shopping experience is thanks to Abhishek Agrawal and his Catalog AI system. Launched in July, the tool collects information from across the Internet about products being sold on Amazon and, based on the data, updates listings to make them more detailed and organized.Abhishek AgrawalEmployerAmazon Web Services in SeattleJob titleEngineering leaderMember grade Senior memberAlma maters University of Allahabad in India and the Indian Statistical Institute in KolkataAgrawal is an engineering leader at Amazon Web Services in Seattle. An expert in AI and machine learning, the IEEE senior member worked on Microsoft’s Bing search engine before moving to Amazon. He also developed several features for Microsoft Teams, the company’s direct messaging platform.“I’ve been working in AI for more than 20 years now,” he says. ”Seeing how much we can do with technology still amazes me.”He shares his expertise and passion for the technology as an active member and volunteer at the IEEE Seattle Section. He organizes and hosts career development workshops that teach people to create an AI agent, which can perform tasks autonomously with minimal human oversight.An AI career inspired by a computerAgrawal was born and raised in Chirgaon, a remote village in Uttar Pradesh, India. When he was growing up, no one in Chirgaon had a computer. His family owned a pharmacy, which Agrawal was expected to join after he graduated from high school. Instead, his uncle and older brother encouraged him to attend college and find his own passion.He enjoyed mathematics and physics, and he decided to pursue a bachelor’s degree in statistics at the University of Allahabad. After graduating in 1996, he pursued a master’s degree in statistics, statistical quality control, and operations research at the Indian Statistical Institute in Kolkata.While at the ISI, he saw a computer for the first time in the laboratory of Nikhil R. Pal, an electronics and communication sciences professor. Pal worked on identifying abnormal clumps of cells in mammogram images using the fuzzy c-means model, a data-clustering technique employing a machine learning algorithm.Agrawal earned his master’s degree in 1998. He was so inspired by Pal’s work, he says, that he stayed on at the university to earn a second master’s degree, in computer science.After graduating in 2001, he joined Novell as a senior software engineer working out of its Bengaluru office in India. He helped develop iFolder, a storage platform that allows users across different computers to back up, access, and manage their files.After four years, Agrawal left Novell to join Microsoft as a software design engineer, working at the company’s Hyderabad campus in India. He was part of a team developing a system to upgrade Microsoft’s software from XP to Vista.Two years later, he was transferred to the group developing Bing, a replacement for Microsoft’s Live Search, which had been launched in 2006.Improving Microsoft’s search engineLive Search had a traffic rate of less than 2 percent and struggled to keep up with Google’s faster-paced, more user-friendly system, Agrawal says. He was tasked with improving search results but, Agrawal says, he and his team didn’t have enough user search data to train their machine learning model.Data for location-specific queries, such as nearby coffee shops or restaurants, was especially important, he says.To overcome those challenges, the team used deterministic algorithms to create a more structured search. Such algorithms give the same answers for any query that uses the same specific terms. The process gets results by taking keywords—such as locations, dates, and prices—and finding them on webpages. To help the search engine understand what users need, Agrawal developed a query clarifier that asked them to refine their search. The machine learning tool then ranked the results from most to least relevant.To test new features before they were launched, Agrawal and his team built an online A/B experimentation platform. Controlled tests were completed on different versions of the products, and the platform ran performance and user engagement metrics, then it produced a scorecard to show changes for updated features.Bing launched in 2009 and is now the world’s second-largest search engine, according to Black Raven.Throughout his 10 years of working on the system, Agrawal upgraded it. He also worked with the advertising department to improve Microsoft’s services on Bing. Ads relevant to a person’s search are listed among the search results.“The work seems easy,” Agrawal says, “but behind every search engine are hundreds of engineers powering ads, query formulations, rankings, relevance, and location detection.”Testing products before launch Agrawal was promoted to software development manager in 2010. Five years later he was transferred to Microsoft’s Seattle offices. At the time, the company was deploying new features for existing platforms without first testing them to ensure effectiveness. Instead, they measured their performance after release, Agrawal says, and that was wreaking havoc.He proposed using his online A/B experimentation platform on all Microsoft products, not just Bing. His supervisor approved the idea. In six months Agrawal and his team modified the tool for company-wide use. Thanks to the platform, he says, Microsoft was able to smoothly deploy up-to-date products to users.After another two years, he was promoted to principal engineering manager of Microsoft Teams, which was facing issues with user experience, he says.“Many employees received between 50 and 100 messages a day—which became overwhelming for them,” Agrawal says. To lessen the stress, he led a team that developed the system’s first machine learning feature: Trending. It prioritized the five most important messages users should focus on. Agrawal also led the launch of incorporating emoji reactions, screen sharing, and video calls for Teams.In 2020 he was ready for new experiences, he says, and he left Microsoft to join Amazon as an engineering leader.Improved Amazon shoppingAgrawal led an Amazon team that manually collected information about products from the company’s retail catalog to create a glossary. The data, which included product dimensions, color, and manufacturer, was used to standardize the language found in product descriptions to keep listings more consistent.That is especially important when it comes to third-party sellers, he notes. Sellers listing a product had been entering as much or as little information as they wanted. Agrawal built a system that automatically suggests language from the glossary as the seller types.He also developed an AI algorithm that utilizes the glossary’s terminology to refine search results based on what a user types into the search bar. When a shopper types “red mixer,” for example, the algorithm lists products under the search bar that match the description. The shopper can then click on a product from the list.In 2023 the retailer’s catalog became too large for Agrawal and his team to collect information manually, so they built an AI tool to do it for them. It became the foundation for Amazon’s Catalog AI system.After gathering information about products from around the Web, Catalog AI uses large language models to update Amazon listings with missing information, correct errors, and rewrite titles and product specifications to make them clearer for the customer, Agrawal says.The company expects the AI tool to increase sales this year by US $7.5 billion, according to a Fox News report in July.Finding purpose at IEEESince Agrawal joined IEEE last December, he has been elevated to senior member and has become an active volunteer.“Being part of IEEE has opened doors for collaboration, mentorship, and professional growth,” he says. “IEEE has strengthened both my technical knowledge and my leadership skills, helping me progress in my career.”Agrawal is the social media chair of the IEEE Seattle Section. He is also vice chair of the IEEE Computational Intelligence Society.He was a workshop cochair for the IEEE New Era AI World Leaders Summit, which was held from 5 to 7 December in Seattle. The event brought together government and industry leaders, as well as researchers and innovators working on AI, intelligent devices, unmanned aerial vehicles, and similar technologies. They explored how new tools could be used in cybersecurity, the medical field, and national disaster rescue missions.Agrawal says he stays up to date on cutting-edge technologies by peer-reviewing 15 IEEE journals.“The organization plays a very important role in bringing authenticity to anything that it does,” he says. “If a journal article has the IEEE logo, you can believe that it was thoroughly and diligently reviewed.”

08.12.2025 19:00:03

Technologie a věda
3 dny

I was interviewing a 72-year-old retired accountant who had unplugged his smart glucose monitor. He explained that he “didn’t know who was looking” at his blood sugar data.This wasn’t a man unfamiliar with technology—he had successfully used computers for decades in his career. He was of sound mind. But when it came to his health device, he couldn’t find clear answers about where his data went, who could access it, or how to control it. The instructions were dense, and the privacy settings were buried in multiple menus. So, he made what seemed like the safest choice: he unplugged it. That decision meant giving up real-time glucose monitoring that his doctor had recommended.The healthcare IoT (Internet of Things) market is projected to exceed $289 billion by 2028, with older adults representing a major share of users. These devices are fall detectors, medication reminders, glucose monitors, heart rate trackers, and others that enable independent living. Yet there’s a widening gap between deployment and adoption. According to an AARP survey, 34% of adults over 50 list privacy as a primary barrier to adopting health technology. That represents millions of people who could benefit from monitoring tools but avoid them because they don’t feel safe.In my study at the University of Denver’s Ritchie School of Engineering and Computer Science, I surveyed 22 older adults and conducted in-depth interviews with nine participants who use health-monitoring devices. The findings revealed a critical engineering failure: 82% understood security concepts like two-factor authentication and encryption, yet only 14% felt confident managing their privacy when using these devices. In my research, I also evaluated 28 healthcare apps designed for older adults and found that 79% lacked basic breach-notification protocols.One participant told me, “I know there’s encryption, but I don’t know if it’s really enough to protect my data.” Another said, “The thought of my health data getting into the wrong hands is very concerning. I’m particularly worried about identity theft or my information being used for scams.”This is not a user knowledge problem; it’s an engineering problem. We’ve built systems that demand technical expertise to operate safely, then handed them to people managing complex health needs while navigating age-related changes in vision, cognition, and dexterity.Measuring the GapTo quantify the issues with privacy setting transparency, I developed the Privacy Risk Assessment Framework (PRAF), a tool that scores healthcare apps across five critical domains.First, the regulatory compliance domain evaluates whether apps explicitly state adherence to the Health Insurance Portability and Accountability Act (HIPAA), the General Data Protection Regulation (GDPR), or other data protection standards. Just claiming to be compliant is not enough—they must provide verifiable evidence.Second, the security mechanisms domain assesses the implementation of encryption, access controls, and, most critically, breach-notification protocols that alert users when their data may have been compromised. Third, in the usability and accessibility domain, the tool examines whether privacy interfaces are readable and navigable for people with age-related visual or cognitive changes. Fourth, data-minimization practices evaluate whether apps collect only necessary information and clearly specify retention periods. Finally, third-party sharing transparency measures whether users can easily understand who has access to their data and why.When I applied PRAF to 28 healthcare apps commonly used by older adults, the results revealed systemic gaps. Only 25% explicitly stated HIPAA compliance, and just 18% mentioned GDPR compliance. Most alarmingly, 79% lacked breach notification protocols, which means that the users may never find out if their data was compromised. The average privacy policy readability scored at a 12th-grade level, even though research shows that the average reading level of older adults is at an 8th grade level. Not a single app included accessibility accommodations in their privacy interfaces.Consider what happens when an older adult opens a typical health app. They face a multi-page privacy policy full of legal terminology about “data controllers” and “processing purposes,” followed by settings scattered across multiple menus. One participant told me, “The instructions are hard to understand, the print is too small, and it’s overwhelming.” Another explained, “I don’t feel adequately informed about how my data is collected, stored, and shared. It seems like most of these companies are after profit, and they don’t make it easy for users to understand what’s happening with their data.”When protection requires a manual people can’t read, two outcomes follow: they either skip security altogether leaving themselves vulnerable, or abandon the technology entirely, forfeiting its health benefits.Engineering for privacyWe need to treat trust as an engineering specification, not a marketing promise. Based on my research findings and the specific barriers older adults face, three approaches address the root causes of distrust.The first approach is adaptive security defaults. Rather than requiring users to navigate complex configuration menus, devices should ship with pre-configured best practices that automatically adjust to data sensitivity and device type. A fall detection system doesn’t need the same settings as a continuous glucose monitor. This approach draws from the principle of “security by default” in systems engineering.Biometric or voice authentication can replace passwords that are easily forgotten or written down. The key is removing the burden of expertise while maintaining strong protection. As one participant put it: “Simplified security settings, better educational resources, and more intuitive user interfaces will be beneficial.”The second approach is real-time transparency. Users shouldn’t have to dig through settings to see where their data goes. Instead, notification systems should show each data access or sharing event in plain language. For example: “Your doctor accessed your heart-rate data at 2 p.m. to review for your upcoming appointment.” A single dashboard should summarize who has access and why.This addresses a concern that came up repeatedly in my interviews: users want to know who is seeing their data and why. The engineering challenge here isn’t technical complexity, it’s designing interfaces that convey technical realities in language anyone can understand. Such systems already exist in other domains; banking apps, for instance, send immediate notifications for every transaction. The same principle applies to health data, where the stakes are arguably higher.The third approach is invisible security updates. Manual patching creates vulnerability windows. Automatic, seamless updates should be standard for any device handling health data, paired with a simple status indicator so users can confirm protection at a glance. As one participant said, “The biggest issue that we as seniors have is the fact that we don’t remember our passwords... The new technology is surpassing the ability of seniors to keep up with it.” Automating updates removes a significant source of anxiety and risk.What’s at StakeWe can keep building healthcare IoT the way we have: fast, feature-rich, and fundamentally untrustworthy. Or, we can engineer systems that are transparent, secure, and usable by design. Trust isn’t something you market through slogans or legal disclaimers. It’s something you engineer, line by line, into the code itself. For older adults relying on technology to maintain independence, that kind of engineering matters more than any new feature we could add. Every unplugged glucose monitor, every abandoned fall detector, every health app deleted out of confusion or fear represents not just a lost sale but a missed opportunity to support someone’s health and autonomy.The challenge of privacy in healthcare IoT goes beyond fixing existing systems, it requires reimagining how we communicate privacy itself. My ongoing research builds on these findings through an AI-driven Data Helper, a system that uses large language models to translate dense legal privacy policies into short, accurate, and accessible summaries for older adults. By making data practices transparent and comprehension measurable, this approach aims to turn compliance into understanding and trust, thus advancing the next generation of trustworthy digital health systems.

08.12.2025 14:00:02

Technologie a věda
6 dní

Technology evolves rapidly, and innovation is key to business survival, so mentoring young professionals, promoting entrepreneurship, and connecting tech startups to a global network of experts and resources are essential.Some IEEE volunteers do all of the above and more as part of the IEEE Entrepreneurship Ambassador Program.The program was launched in 2018 in IEEE Region 8 (Europe, Middle East, and Africa) thanks to a grant from the IEEE Foundation. The ambassadors organize networking events with industry representatives to help IEEE young professionals and student members achieve their entrepreneurial endeavors and strengthen their technical, interpersonal, and business skills. The ambassadors also organize pitch competitions in their geographic area.The ambassador program launched this year in Region 10 (Asia Pacific).Last year the program was introduced in Region 9 (Latin America) with funding from the Taenzer Memorial Fund. The results of the program’s inaugural year were impressive: 13 ambassadors organized events in Bolivia, Brazil, Colombia, Ecuador, Mexico, Panama, Peru, and Uruguay.“The program is beneficial because it connects entrepreneurs with industry professionals, fosters mentorship, helps young professionals build leadership skills, and creates opportunities for startup sponsorships,” says Susana Lau, vice chair of IEEE Entrepreneurship in Latin America. “The program has also proven successful in attracting IEEE volunteers to serve as ambassadors and helping to support entrepreneurship and startup ventures.”Lau, an IEEE senior member, is a past president of the IEEE Panama Section and an active IEEE Women in Engineering volunteer.A professional development opportunityPeople who participated in the Region 9 program say the experience was life-changing, both personally and professionally.Pedro José Pineda, whose work was recognized with one of the region’s two Top Ambassador Awards, says he’s been able to “expand international collaborations and strengthen the innovation ecosystem in Latin America.“It’s more than an award,” the IEEE member says. “It’s an opportunity to create global impact from local action.”“This remarkable experience has opened new doors for my future career within IEEE, both nationally and globally.”—Vitor PaivaThe region’s other Top Ambassador recipient was Vitor Paiva of Natal, Brazil. He had the opportunity to attend this year’s IEEE Rising Stars in Las Vegas—his first international experience outside Brazil.After participating in the program, the IEEE student member volunteered with its regional marketing committee.“I was proud to showcase Brazil’s IEEE community while connecting with some of IEEE’s most influential leaders,” Paiva, a student at the Universidade Federal do Rio Grande do Norte, says. “This remarkable experience has opened new doors for my future career within IEEE, both nationally and globally.”Expanding the initiativeThe IEEE Foundation says it will invest in the regional programs by funding the grants presented to the winners of the regional pitch competitions, similar to the funding for Region 9. The goal is to hold a worldwide competition, Lau says.The ongoing expansion is a testament to the program’s efforts, says Christopher G. Wright, senior manager of programs and governance at the IEEE Foundation.“I’ve had the pleasure of working on the grants for the IEEE Entrepreneurship Ambassador Program team over the years,” Wright says, “and I am continually impressed by the team’s dedication and the program’s evolution.”To learn more about the program in your region or to apply to become an ambassador, visit the IEEE Entrepreneurship website and search for your region.

05.12.2025 19:00:02

Technologie a věda
6 dní

Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.ICRA 2026: 1–5 June 2026, VIENNAEnjoy today’s videos! EPFL scientists have integrated discarded crustacean shells into robotic devices, leveraging the strength and flexibility of natural materials for robotic applications.[ EPFL ]Finally, a good humanoid robot demo!Although having said that, I never trust videos demos where it works really well once, and then just pretty well every other time.[ LimX Dynamics ]Thanks, Jinyan!I understand how these structures work, I really do. But watching something rigid extrude itself from a flexible reel will always seem a little magical.[ AAAS ]Thanks, Kyujin!I’m not sure what “industrial grade” actually means, but I want robots to be “automotive grade,” where they’ll easily operate for six months or a year without any maintenance at all.[ Pudu Robotics ]Thanks, Mandy!When you start to suspect that your robotic EV charging solution costs more than your car.[ Flexiv ]Yeah uh if the application for this humanoid is actually making robot parts with a hammer and anvil, then I’d be impressed.[ EngineAI ]Researchers at Columbia Engineering have designed a robot that can learn a human-like sense of neatness. The researchers taught the system by showing it millions of examples, not teaching it specific instructions. The result is a model that can look at a cluttered tabletop and rearrange scattered objects in an orderly fashion.[ Paper ]Why haven’t we seen this sort of thing in humanoid robotics videos yet?[ HUCEBOT ]While I definitely appreciate in-the-field testing, it’s also worth asking to what extent your robot is actually being challenged by the in-the-field field that you’ve chosen.[ DEEP Robotics ]Introducing HMND 01 Alpha Bipedal — autonomous, adaptive, designed for real-world impact. Built in 5 months, walking stably after 48 hours of training.[ Humanoid ]Unitree says that “this is to validate the overall reliability of the robot” but I really have to wonder how useful this kind of reliability validation actually is.[ Unitree ]This University of Pennsylvania GRASP on Robotics Seminar is by Jie Tan from Google DeepMind, on “Gemini Robotics: Bringing AI into the Physical World.”Recent advancements in large multimodal models have led to the emergence of remarkable generalist capabilities in digital domains, yet their translation to physical agents such as robots remains a significant challenge. In this talk, I will present Gemini Robotics, an advanced Vision-Language-Action (VLA) generalist model capable of directly controlling robots. Furthermore, I will discuss the challenges, learnings and future research directions on robot foundation models.[ University of Pennsylvania GRASP Laboratory ]

05.12.2025 17:30:02

Technologie a věda
7 dní

When people want a clear-eyed take on the state of artificial intelligence and what it all means, they tend to turn to Melanie Mitchell, a computer scientist and a professor at the Santa Fe Institute. Her 2019 book, Artificial Intelligence: A Guide for Thinking Humans, helped define the modern conversation about what today’s AI systems can and can’t do. Melanie MitchellToday at NeurIPS, the year’s biggest gathering of AI professionals, she gave a keynote titled “On the Science of ‘Alien Intelligences’: Evaluating Cognitive Capabilities in Babies, Animals, and AI.” Ahead of the talk, she spoke with IEEE Spectrum about its themes: Why today’s AI systems should be studied more like nonverbal minds, what developmental and comparative psychology can teach AI researchers, and how better experimental methods could reshape the way we measure machine cognition.You use the phrase “alien intelligences” for both AI and biological minds like babies and animals. What do you mean by that?Melanie Mitchell: Hopefully you noticed the quotation marks around “alien intelligences.” I’m quoting from a paper by [the neural network pioneer] Terrence Sejnowski where he talks about ChatGPT as being like a space alien that can communicate with us and seems intelligent. And then there’s another paper by the developmental psychologist Michael Frank who plays on that theme and says, we in developmental psychology study alien intelligences, namely babies. And we have some methods that we think may be helpful in analyzing AI intelligence. So that’s what I’m playing on.When people talk about evaluating intelligence in AI, what kind of intelligence are they trying to measure? Reasoning or abstraction or world modeling or something else?Mitchell: All of the above. People mean different things when they use the word intelligence, and intelligence itself has all these different dimensions, as you say. So, I used the term cognitive capabilities, which is a little bit more specific. I’m looking at how different cognitive capabilities are evaluated in developmental and comparative psychology and trying to apply some principles from those fields to AI.Current Challenges in Evaluating AI CognitionYou say that the field of AI lacks good experimental protocols for evaluating cognition. What does AI evaluation look like today?Mitchell: The typical way to evaluate an AI system is to have some set of benchmarks, and to run your system on those benchmark tasks and report the accuracy. But often it turns out that even though these AI systems we have now are just killing it on benchmarks, they’re surpassing humans, that performance doesn’t often translate to performance in the real world. If an AI system aces the bar exam, that doesn’t mean it’s going to be a good lawyer in the real world. Often the machines are doing well on those particular questions but can’t generalize very well. Also, tests that are designed to assess humans make assumptions that aren’t necessarily relevant or correct for AI systems, about things like how well a system is able to memorize.As a computer scientist, I didn’t get any training in experimental methodology. Doing experiments on AI systems has become a core part of evaluating systems, and most people who came up through computer science haven’t had that training.What do developmental and comparative psychologists know about probing cognition that AI researchers should know too?Mitchell: There’s all kinds of experimental methodology that you learn as a student of psychology, especially in fields like developmental and comparative psychology because those are nonverbal agents. You have to really think creatively to figure out ways to probe them. So they have all kinds of methodologies that involve very careful control experiments, and making lots of variations on stimuli to check for robustness. They look carefully at failure modes, why the system [being tested] might fail, since those failures can give more insight into what’s going on than success.Can you give me a concrete example of what these experimental methods look like in developmental or comparative psychology?Mitchell: One classic example is Clever Hans. There was this horse, Clever Hans, who seemed to be able to do all kinds of arithmetic and counting and other numerical tasks. And the horse would tap out its answer with its hoof. For years, people studied it and said, “I think it’s real. It’s not a hoax.” But then a psychologist came around and said, “I’m going to think really hard about what’s going on and do some control experiments.” And his control experiments were: first, put a blindfold on the horse, and second, put a screen between the horse and the question asker. Turns out if the horse couldn’t see the question asker, it couldn’t do the task. What he found was that the horse was actually perceiving very subtle facial expression cues in the asker to know when to stop tapping. So it’s important to come up with alternative explanations for what’s going on. To be skeptical not only of other people’s research, but maybe even of your own research, your own favorite hypothesis. I don’t think that happens enough in AI.Do you have any case studies from research on babies?Mitchell: I have one case study where babies were claimed to have an innate moral sense. The experiment showed them videos where there was a cartoon character trying to climb up a hill. In one case there was another character that helped them go up the hill, and in the other case there was a character that pushed them down the hill. So there was the helper and the hinderer. And the babies were assessed as to which character they liked better—and they had a couple of ways of doing that—and overwhelmingly they liked the helper character better. [Editor's note: The babies were 6 to 10 months old, and assessment techniques included seeing whether the babies reached for the helper or the hinderer.]But another research group looked very carefully at these videos and found that in all of the helper videos, the climber who was being helped was excited to get to the top of the hill and bounced up and down. And so they said, “Well, what if in the hinderer case we have the climber bounce up and down at the bottom of the hill?” And that completely turned around the results. The babies always chose the one that bounced.Again, coming up with alternatives, even if you have your favorite hypothesis, is the way that we do science. One thing that I’m always a little shocked by in AI is that people use the word skeptic as a negative: “You’re an LLM skeptic.” But our job is to be skeptics, and that should be a compliment.Importance of Replication in AI StudiesBoth those examples illustrate the theme of looking for counter explanations. Are there other big lessons that you think AI researchers should draw from psychology?Mitchell: Well, in science in general the idea of replicating experiments is really important, and also building on other people’s work. But that’s sadly a little bit frowned on in the AI world. If you submit a paper to NeurIPS, for example, where you replicated someone’s work and then you do some incremental thing to understand it, the reviewers will say, “This lacks novelty and it’s incremental.” That’s the kiss of death for your paper. I feel like that should be appreciated more because that’s the way that good science gets done.Going back to measuring cognitive capabilities of AI, there’s lots of talk about how we can measure progress towards AGI. Is that a whole other batch of questions?Mitchell: Well, the term AGI is a little bit nebulous. People define it in different ways. I think it’s hard to measure progress for something that’s not that well defined. And our conception of it keeps changing, partially in response to things that happen in AI. In the old days of AI, people would talk about human-level intelligence and robots being able to do all the physical things that humans do. But people have looked at robotics and said, “Well, okay, it’s not going to get there soon. Let’s just talk about what people call the cognitive side of intelligence,” which I don’t think is really so separable. So I am a bit of an AGI skeptic, if you will, in the best way.

04.12.2025 23:30:02

Technologie a věda
7 dní

The world’s first mass-produced ethanol car, the Fiat 147, motored onto Brazilian roads in 1979. The vehicle crowned decades of experimentation in the country with sugar-cane (and later, corn-based and second-generation sugar-cane waste) ethanol as a homegrown fuel. When Chinese automaker BYD introduced a plug-in hybrid designed for Brazil in October, equipped with a flex-fuel engine that lets drivers choose to run on any ratio of gasoline and ethanol or access plug-in electric power, the move felt like the latest chapter in a long national story.The new engine, designed for the company’s best-selling compact SUV, the Song Pro, is the first plug-in hybrid engine dedicated to biofuel, according to Wang Chuanfu, BYD’s founder and CEO.Margaret Wooldridge, a professor of mechanical engineering at the University of Michigan, in Ann Arbor, says the engine’s promise is not in inventing entirely new technology, but in making it accessible.RELATED: The Omnivorous Engine“The technology existed before,” says Wooldridge, who specializes in hybrid systems, “but fuel switching is expensive, and I’d expect the combinations in this engine to come at a fairly high price tag. BYD’s real innovation is pulling it into a price range where everyday drivers in Brazil can actually choose ratios of ethanol and gasoline, as well as electric.”BYD’s Affordable Hybrid InnovationBYD Song Pro vehicles with this new engine were initially priced in a promotion at around US $25,048, with a list price around $35,000. For comparison, another plug-in hybrid vehicle, Toyota’s 2026 Prius Prime, starts at $33,775. The engine is the product of an $18.5 million investment by BYD and a collaboration between Brazilian and Chinese scientists. It adds to Brazil’s history of ethanol use that began in the 1930s and progressed from ethanol-only to flex-fuel vehicles, providing consumers a tool kit to respond to changing fuel prices, ongoing drought like Brazil experienced in the 1980s, or emissions goals.An engine switching between gasoline and ethanol needs a sensor that can reconcile two distinct fuel-air mixtures. “Integrating that control system, especially in a hybrid architecture, is not trivial,” says Wooldridge. “But BYD appears to have engineered it in a way that’s cost-effective.”By leveraging a smaller, downsized hybrid engine, the company is likely able to design the engine to be optimal over a smaller speed map—a narrower, specific range of speeds and power output—avoiding some efficiency compromises that have long plagued flex-fuel power-train engines, says Wooldridge.In general, standard flex-fuel vehicles (FFVs) have an internal combustion engine and can operate on gasoline and any blend of gasoline and ethanol up to 83 percent, according to the U.S. Department of Energy. FFV engines have only one fuel system, and mostly use components that are the same as those found in gasoline-only cars. To compensate for ethanol’s different chemical properties and power output compared to gasoline, special components modify the fuel pump and fuel-injection system. In addition, FFV engines have engine control modules calibrated to accommodate ethanol’s higher oxygen content.“Flex-fuel gives consumers flexibility,” Wooldridge says. “If you’re using ethanol, you can run at a higher compression ratio, allowing molecules to be squeezed into a smaller space to allow for faster, more powerful and more efficient combustion. Increasing that ratio boosts efficiency and lowers knock—but if you’re also tying in electric drive, the system can stay optimally efficient across different modes,” she adds.Jennifer Eaglin, a historian of Brazilian energy at Ohio State University, in Columbus, says that BYD is tapping into something deeply rooted in the culture of Brazil, the world’s seventh-most populous country (with a population of around 220 million).“Brazil has built an ethanol-fuel system that’s durable and widespread,” Eaglin says. “It’s no surprise that a company like BYD, recognizing that infrastructure, would innovate to give consumers more options. This isn’t futuristic—it’s a continuation of a long national experiment.”

04.12.2025 20:45:47

Technologie a věda
8 dní

CADmore Metal has introduced a fresh take on 3D printing metal components to the North American market known as cold metal fusion (CMF). John Carrington, the company’s CEO, claims CMF produces stronger 3D printed metal parts that are cheaper and faster to make. That includes titanium components, which have historically caused trouble for 3D printers.3D printing has used metals included aluminum, powdered steel, and nickel alloys for some time. While titanium parts are in high demand in fields such as aerospace and health care due to their superior strength-to-weight ratio, corrosion resistance, and their suitability for complex geometries, the metal has presented challenges for 3D printers.Titanium becomes more reactive at high temperatures and tends to crack when the printed part cools. It can also become brittle as it absorbs hydrogen, oxygen, or nitrogen during the printing process. Carrington says CMF overcomes these issues.“Our primary customers tend to come from the energy, defense, and aerospace industries,” says Carrington. “One large defense contractor recently switched from traditional 3D printing to CMF as it will save them millions and reduce prototyping and parts production by months.”How CMF Enhances Titanium 3D Printing EfficiencyCMF combines the flexibility of 3D printing with new powder metallurgy processes to provide strength and greater durability to parts made from titanium and many other metals and alloys. The process uses a combination of proprietary metal powder and polymer binding agents that are fused layer by layer to create high-strength metal components.The process begins like any other 3D printing project: A digital file that represents the desired 3D object directs the actions of a standard industrial 3D printer in laying down a mixture of metal and a plastic binder. A laser lightly fuses each layer of powder into a cohesive solid structure. Excess powder is removed for reuse.Where CMF differs is that the initial parts generated by this stage of the process are strong enough for grinding, drilling, and milling if required. The parts then soak in a solvent to dissolve the plastic binder. Next, they go into a furnace to burn off any remaining binder, fuse the metal particles, and compact them into a dense metal component. Surface or finishing treatments can then be applied such as polishing and heat treatment.“Our cold metal fusion technology offers a process that is at least three times faster and more scalable than any other kind of 3D printing,” says Carrington. “Per-part prices are generally 50 to 60 percent less than alternative metal 3D printing technology. We expect those prices to go down even more as we scale.” 3D printing with metal powders such as titanium makes it possible to create parts with complex geometries.CADmore MetalThe material used in CMF was developed by Headmade Materials, a German company. Headmade holds a patent on this 3D printing feedstock, which has been designed for use by the existing ecosystem of 3D printing machines. CADmore Metal serves as the exclusive North American distributor for the metal powders used in CMF. The company can also serve as a systems integrator for the entire process by providing the printing and sintering hardware, the specialized powders, process expertise, training, and technical support.“We provide guidance on design optimization and integration with existing workflows to help customers maximize the technology’s benefits,” says Carrington. “If a turbine company comes to us to produce their parts using CMF, we can either build the parts for them as a service or set them up to carry out their own production internally while we supply the powder and support.”With the global 3D printing market now worth almost US $5 billion and predicted to reach $13 billion by 2035, according to analyst firm IDTechEx, the arrival of CMF is timely. CADmore Metal just opened North America’s first CMF application center, a nearly 280-square-meter (3,000-square-foot) facility in Columbia, S.C. Carrington says that a larger facility will open in 2026 to make room for more material processing and equipment.

03.12.2025 19:55:59

Technologie a věda
8 dní

Daniela Rus has spent her career breaking barriers—scientific, social, and material—in her quest to build machines that amplify rather than replace human capability. She made robotics her life’s work, she says, because she understood it was a way to expand the possibilities of computing while enhancing human capabilities.“I like to think of robotics as a way to give people superpowers,” Rus says. “Machines can help us reach farther, think faster, and live fuller lives.”Daniela RusEmployer MITJob titleProfessor of electrical and computer engineering and computer science; director of the MIT Computer Science and Artificial Intelligence LaboratoryMember gradeFellowAlma maters University of Iowa, in Iowa City; CornellHer dual missions, she says, are to make technology humane and to make the most of the opportunities afforded by life in the United States. The two goals have fueled her journey from a childhood living under a dictatorship in Romania to the forefront of global robotics research.Rus, who is director of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), is the recipient of this year’s IEEE Edison Medal, which recognizes her for “sustained leadership and pioneering contributions in modern robotics.”An IEEE Fellow, she describes the recognition as a responsibility to further her work and mentor the next generation of roboticists entering the field.The Edison Medal is the latest in a string of honors she has received. In 2017 she won an Engelberger Robotics Award from the Robotic Industries Association. The following year, she was honored with the Pioneer in Robotics and Automation Award by the IEEE Robotics and Automation Society. The society recognized her again in 2023 with its IEEE Robotics and Automation Technical Field Award.From Romania to Iowa Rus was born in Cluj-Napoca, Romania, during the rule of dictator Nicolae Ceausescu. Her early life unfolded in a world defined by scarcity—rationed food, intermittent electricity, and a limited ability to move up or out. But she recalls that, amid the stifling insufficiencies, she was surrounded by an irrepressible warmth and intellectual curiosity—even when she was making locomotive screws in a state-run factory as part of her school’s curriculum.“Life was hard,” she says, “but we had great teachers and strong communities. As a child, you adapt to whatever is around you.”Her father, Teodor, was a computer scientist and professor, and her mother, Elena, was a physicist.In 1982, when she was 19, Rus’s father emigrated to the United States to join the faculty at the University of Iowa, in Iowa City. It was an act of courage and conviction. Within a year, Daniela and her mother joined him there.“He wanted the freedom to think, to publish, to explore ideas,” Rus says. “And I reaped the benefits of being free from the limitations of our homeland.”America’s open horizons were intoxicating, she says.A lecture that changed everythingRus decided to pursue a degree at her father’s university, where her life changed direction, she says. One afternoon, John Hopcroft—a Turing Award–winning Cornell computer scientist renowned for his work on algorithms and data structures—gave a talk on campus. His message was simple but electrifying, Rus says: Classical computer science had been solved. The next frontier, Hopcroft declared, was computations that interact with the messy physical world.For Rus, the idea was a revelation.“It was as if a door had opened,” she says. “I realized the future of computing wasn’t just about logic and code; it was about how machines can perceive, move, and help us in the real world.”After the lecture, she introduced herself to Hopcroft and told him she wanted to learn from him. Not long after earning her bachelor’s degree in computer science and mathematics in 1985, she applied to get a master’s degree at Cornell, where Hopcroft became her graduate advisor. Rus developed algorithms there for dexterous robotic manipulation—teaching machines to grasp and move objects with precision. She earned her master’s in computer science in 1990, then stayed on at Cornell to work toward a doctorate.“I like to think of robotics as a way to give people superpowers. Machines can help us reach farther, think faster, and live fuller lives.”In 1993 she earned her Ph.D. in computer science, then took a position as an assistant professor of computer science at Dartmouth College, in Hanover, N.H. She founded the college’s robotics laboratory and expanded her work into distributed robotics. She developed teams of small robots that cooperated to perform tasks such as ensuring products in warehouses are correctly gathered to fulfill orders, get packaged safely, and are routed to their respective destinations efficiently.Despite a lack of traditional machine shop facilities for fabrication on the Hanover campus, Rus found a way. She pioneered the use of 3D printing to rapidly prototype and build robots.In 2003 she left Dartmouth to become a professor in the electrical engineering and computer science department at MIT.The robotics lab she created at Dartmouth moved with her to MIT and became known as the Distributed Robotics Laboratory (DRL). In 2012 she was named director of MIT’s Computer Science and Artificial Intelligence Laboratory, the school’s largest interdisciplinary lab, with 60 research groups including the DRL. She also continues to serve as the DRL’s principal investigator.The science of physical intelligenceRus now leads pioneering research at the intersection of AI and robotics, a field she calls physical intelligence. It’s “a new form of intelligent machine that can understand dynamic environments, cope with unpredictability, and make decisions in real time,” she says.Her lab builds soft-body robots inspired by nature that can sense, adapt, and learn. They are AI-driven systems that passively handle tasks—such as self-balancing and complex articulation similar to that done by the human hand—because their shape and materials minimize the need for heavy processing.Such machines, she says, someday will be able to navigate different environments, perform useful functions without external control, and even recover from disturbances to their route planning. Researchers also are exploring ways to make them more energy-efficient.One prototype developed by Rus’s team is designed to retrieve foreign objects from the body, including batteries swallowed by children. The ingestible robots are artfully folded, similar to origami, so they are small enough to be swallowed. Embedded magnetic materials allow doctors to steer the soft robots and control their shape. Upon arriving in the stomach, a soft bot can be programmed to wrap around a foreign object and guide it safely out of the patient’s body.CSAIL researchers also are working on small robots that can carry a medication and release it at a specific area within the digestive tract, bypassing the stomach acid known to diminish some drugs’ efficacy. Ingestible robots also could patch up internal injuries or ulcers. And because they’re made from digestible materials such as sausage casings and biocompatible polymers, the robots can perform their assigned tasks and then get safely absorbed by the body, she says.Health care isn’t the only application on the horizon for such AI-driven technologies. Robots with physical intelligence might someday help firefighters locate people trapped in burning buildings, find miners after a cave-in, and provide valuable situational awareness information to emergency response teams in the aftermath of natural disasters, Rus says.“What excites me is the possibility of giving people new powers,” she says. “Machines that can think and move safely in the physical world will let us extend human reach—at work, at home, in medicine … everywhere.”To make such a vision a reality, she has expanded her technical interests to include several complementary lines of research.She’s working on self-reconfiguring and modular robots such as MIT’s M-Blocks and NASA’s SuperBots, which can attach, detach, and rearrange themselves to form shapes suited for different actions such as slithering, climbing, and crawling.With networked robots—including those Amazon uses in its warehouses—thousands of machines can operate as a large adaptive system. The machines communicate continuously to divide tasks, avoid collisions, and optimize package routing.Rus’s team also is making advances in human-robot interaction, such as reading brainwave activity and interpreting sign language through a smart glove.To further her plan of putting all the computerized smarts the robots need within their physical bodies instead of in the cloud, she helped found Liquid AI in 2023. The company, based in Cambridge, Mass., develops liquid neural networks, inspired by the simple brains of worms, that can learn and adapt continuously. The word liquid in this case refers to the adaptability, flexibility, and dynamic nature of the team’s model architecture. It can change shape and adapt to new data inputs, and it fits within constraints imposed by the hardware in which it’s contained, she says.Finding community in IEEERus joined IEEE at one of its robotics conferences when she was a graduate student.“I think I signed up just to get the student discount,” she says with a laugh. “But IEEE turned out to be the place where my community lived.”She credits the organization’s conferences, journals, and collaborative spirit with shaping her professional growth.“The exchange of ideas, the chance to test your thinking against others—it’s invaluable,” she says. “It’s how our field moves forward.”Rus continues to serve on IEEE panels and committees, mentoring the next generation of roboticists.“IEEE gave me a platform,” Rus says. “It taught me how to communicate, how to lead, and how to dream bigger.”Living the American dreamLooking back, Rus sees her story as a testament to unforeseen possibilities.“When I was growing up in Romania, I couldn’t even imagine living in America,” she says. “Now I’m here, working with brilliant students, building robots that help people, and trying to make a difference. I feel like I’m living the American dream.”In a nod to a memorable song from the Broadway musical Hamilton, Rus echoes Alexander Hamilton’s determination to make the most of his opportunities, saying, “I don’t ever want to throw away my shot.”

03.12.2025 19:00:02

Technologie a věda
8 dní

This article is crossposted from IEEE Spectrum’s careers newsletter. Sign up now to get insider tips, expert advice, and practical strategies, written in partnership with tech career development company Taro and delivered to your inbox for free!A word that frequently comes up in career conversations is, unfortunately, “toxic.” The engineers I speak with will tell me that they’re dealing with a toxic manager, a toxic teammate, or a toxic work culture. When you find yourself in a toxic work environment, what should you do?Is it worth trying to improve things over time, or should you just leave? The difficult truth is that, in nearly every case, the answer is to leave a toxic team as soon as you can. Here’s why:If you’re earlier in your career, you frankly don’t have much political power in the organization. Any arguments to change team culture or address systemic problems will likely fall on deaf ears. You’ll end up frustrated, and your efforts will be wasted.If you’re more senior, you have some ability to improve processes and relationships on the team. However, if you’re an individual contributor (IC), your capabilities are still limited. There is likely some “low-hanging fruit” of quick improvements to suggest. A few thoughtful pieces of feedback could address many of the problems. If you’ve done that and things are still not getting better, it’s probably time to leave.If you’re part of upper management, you may have inherited the problem, or maybe you were even brought in to solve it. This is the rare case where you could consider the change scenario and address the broken culture: You have both the context and power to make a difference.The world of technology is large, and constantly getting larger. Don’t waste your time on a bad team or with a bad manager. Find another team, company, or start something on your own.Engineers often hesitate to leave a poor work environment because they’re afraid or unsure about the process of finding something new. That’s a valid concern. However, inertia should not be the reason you stick around in a job. The best careers stem from the excitement of actively choosing your work, not tolerating toxicity.Finally, it’s worth noting that even in a toxic team, you’ll still come across smart and kind people. If you are stuck on a bad team, seek out the people who match your wavelength. These relationships will enable you to find new opportunities when you inevitably decide to leave!—RahulIEEE Podcast Focuses on Women in TechAre you looking for a new podcast to add to your queue? IEEE Women in Engineering recently launched a podcast featuring experts from around the world to discuss workplace challenges and amplify the diverse experience of women from various STEM fields. New episodes are released on the third Wednesday of each month. Read more here. How to Think Like an EntrepreneurEntrepreneurship is a skill that can benefit all engineers. The editor in chief of IEEE Engineering Management Review shares his tips for acting more like an entrepreneur, from changing your mode of thinking to executing a plan. “The shift from ‘someone should’ to ‘I will’ is the start of entrepreneurial thinking,” the author writes. Read more here. Cultivating Innovation in a Research LabIn a piece for Communications of the ACM, a former employee of Xerox PARC reflects on the lessons he learned about managing a research lab. The philosophies that underpin innovative labs, the author says, require a different approach than those focused on delivering products or services. See how these unwritten rules can help cultivate breakthroughs. Read more here.

03.12.2025 15:50:50

Technologie a věda
9 dní

When the head of Nokia Bell Labs core research talks about “lessons learned” from 5G, he’s also being candid about the ways in which not everying worked out quite as planned.That candor matters now, too, because Bell Labs core research president Peter Vetter says 6G’s success depends on getting infrastructure right the first time—something 5G didn’t fully do.By 2030, he says, 5G will have exhausted its capacity. Not because some 5G killer app will appear tomorrow, suddenly making everyone’s phones demand 10 or 100 times as much data capacity as they require today. Rather, by the turn of the decade, wireless telecom won’t be centered around just cellphones anymore.AI agents, autonomous cars, drones, IoT nodes, and sensors, sensors, sensors: Everything in a 6G world will potentially need a way on to the network. That means more than anything else in the remaining years before 6G’s anticipated rollout, high-capacity connections behind cell towers are a key game to win. Which brings industry scrutiny, then, to what telecom folks call backhaul—the high-capacity fiber or wireless links that pass data from cell towers toward the internet backbone. It’s the difference between the “local” connection from your phone to a nearby tower and the “trunk” connection that carries millions of signals simultaneously. But the backhaul crisis ahead isn’t just about capacity. It’s also about architecture. 5G was designed around a world where phones dominated, downloading video at higher and higher resolutions. 6G is now shaping up to be something else entirely. This inversion—from 5G’s anticipated downlink deluge to 6G’s uplink resurgence—requires rethinking everything at the core level, practically from scratch.Vetter’s career spans the entire arc of the wireless telecom era—from optical interconnections in the 1990s at Alcatel (a research center pioneering fiber-to-home connections) to his roles at Bell Labs and later Nokia Bell Labs, culminating in 2021 in his current position at the industry’s bellwether institution.In this conversation, held in November at the Brooklyn 6G Summit in New York, Vetter explains what 5G got wrong, what 6G must do differently, and whether these innovations can arrive before telecom’s networks start running out of room.5G’s Expensive MiscalculationIEEE Spectrum: Where is telecom today, halfway between 5G’s rollout and 6G’s anticipated rollout?Peter Vetter: Today, we have enough spectrum and capacity. But going forward, there will not be enough. The 5G network by the end of the decade will run out of steam, as we see in our traffic simulations and forecasts. And it is something that has been consistent generation to generation, from 2G to 3G to 4G. Every decade, capacity goes up by about a factor of 10. So you need to prepare for that.And the challenge for us as researchers is how do you do that in an energy-efficient way? Because the power consumption cannot go up by a factor of 10. The cost cannot go up by a factor of 10. And then, lesson learned from 5G: The idea was, “Oh, we do that in higher spectrum. There is more bandwidth. Let’s go to millimeter wave.” The lesson learned is, okay, millimeter waves have short reach. You need a small cell [tower] every 300 meters or so. And that doesn’t cut it. It was too expensive to install all these small cells.Is this related to the backhaul question?Vetter: So backhaul is the connection between the base station and what we call the core of the network—the data centers, and the servers. Ideally, you use fiber to your base station. If you have that fiber as a service provider, use it. It gives you the highest capacity. But very often new cell sites don’t have that fiber backhaul, then there are alternatives: wireless backhaul. Nokia Bell Labs has pioneered a glass-based chip architecture for telecom’s backhaul signals, communicating between towers and telecom infrastructure.NokiaRadios Built on Glass Push Frequencies HigherWhat are the challenges ahead for wireless backhaul?Vetter: To get up to the 100-gigabit-per-second, fiber-like speeds, you need to go to higher frequency bands.Higher frequency bands for the signals the backhaul antennas use?Vetter: Yes. The challenge is the design of the radio front ends and the radio-frequency integrated circuits (RFICs) at those frequencies. You cannot really integrate [present-day] antennas with RFICs at those high speeds.And what happens as those signal frequencies get higher?Vetter: So in a millimeter wave, say 28 gigahertz, you could still do [the electronics and waveguides] for this with a classical printed circuit board. But as the frequencies go up, the attenuation gets too high.What happens when you get to, say, 100 GHz?Vetter: [Conventional materials] are no good anymore. So we need to look at other still low-cost materials. We have done pioneering work at Bell Labs on radio on glass. And we use glass not for its optical transparency, but for its transparency in the subterahertz radio range.Is Nokia Bell Labs making these radio-on-glass backhaul systems for 100-GHz communications?Vetter: Above 100 GHz, you need to look into a different material. I used an order of magnitude, but [the wavelength range] is actually 140 to 170 GHz, what is called the D-Band.We collaborate with our internal customers to get these kind of concepts on the long-term road map. As an example, that D-Band radio system, we actually integrated it in a prototype with our mobile business group. And we tested it last year at the Olympics in Paris.But this is, as I said, a prototype. We need to mature the technology between a research prototype and qualifying it to go into production. The researcher on that is Shahriar Shahramian. He’s well-known in the field for this.Why 6G’s Bandwidth Crisis Isn’t About PhonesWhat will be the applications that’ll drive the big 6G demands for bandwidth?Vetter: We’re installing more and more cameras and other types of sensors. I mean, we’re going into a world where we want to create large world models that are synchronous copies of the physical world. So what we will see going forward in 6G is a massive-scale deployment of sensors which will feed the AI models. So a lot of uplink capacity. That’s where a lot of that increase will come from.Any others?Vetter: Autonomous cars could be an example. It can also be in industry—like a digital twin of a harbor, and how you manage that? It can be a digital twin of a warehouse, and you query the digital twin, “Where is my product X?” Then a robot will automatically know thanks to the updated digital twin where it is in the warehouse and which route to take. Because it knows where the obstacles are in real time, thanks to that massive-scale sensing of the physical world and then the interpretation with the AI models.You will have your agents that act on behalf of you to do your groceries or order a driverless car. They will actively record where you are, make sure that there are also the proper privacy measures in place. So that your agent has an understanding of the state you’re in and can serve you in the most optimal way.How 6G Networks Will Help Detect Drones, Earthquakes, and TsunamisYou’ve described before how 6G signals can not only transmit data but also provide sensing. How will that work?Vetter: The augmentation now is that the network can be turned also in a sensing modality. That if you turn around the corner, a camera doesn’t see you anymore. But the radio still can detect people that are coming, for instance, at a traffic crossing. And you can anticipate that. Yeah, warn a car that, “There’s a pedestrian coming. Slow down.” We also have fiber sensing. And for instance, using fibers at the bottom of the ocean and detecting movements of waves and detect tsunamis, for instance, and do an early tsunami warning.What are your teams’ findings?Vetter: The present-day use of tsunami warning buoys are a few hundred kilometers offshore. These tsunami waves travel at 300 and more meters per second, and so you only have 15 minutes to warn the people and evacuate. If you have now a fiber sensing network across the ocean that you can detect it much deeper in the ocean, you can do meaningful early tsunami warning.We recently detected there was a major earthquake in East Russia. That was last July. And we had a fiber sensing system between Hawaii and California. And we were able to see that earthquake on the fiber. And we also saw the development of the tsunami wave.6G’s Thousands of Antennas and Smarter WaveformsBell Labs was an early pioneer in multiple-input, multiple-output (MIMO) antennas starting in the 1990s. Where multiple transmit and receive antennas could carry many data streams at once. What is Bell Labs doing with MIMO now to help solve these bandwidth problems you’ve described?Vetter: So, as I said earlier, you want to provide capacity from existing cell sites. And the way to MIMO can do that by a technology called a simplified beamforming: If you want better coverage at a higher frequency, you need to focus your electromagnetic energy, your radio energy, even more. So in order to do that, you need a larger amount of antennas.So if you double the frequency, we go from 3.5 GHz, which is the C-band in 5G, now to 6G, 7 GHz. So it’s about double. That means the wavelength is half. So you can fit four times more antenna elements in the same form factor. So physics helps us in that sense.What’s the catch?Vetter: Where physics doesn’t help us is more antenna elements means more signal processing, and the power consumption goes up. So here is where the research then comes in. Can we creatively get to these larger antenna arrays without the power consumption going up?The use of AI is important in this. How can we leverage AI to do channel estimation, to do such things as equalization, to do smart beamforming, to learn the waveform, for instance?We’ve shown that with these kind of AI techniques, we can get actually up to 30 percent more capacity on the same spectrum.And that allows many gigabits per second to go out to each phone or device?Vetter: So gigabits per second is already possible in 5G. We’ve demonstrated that. You can imagine that this could go up, but that’s not really the need. The need is really how many more can you support from a base station?

02.12.2025 21:17:22

Technologie a věda
10 dní

Talking to Robert N. Charette can be pretty depressing. Charette, who has been writing about software failures for this magazine for the past 20 years, is a renowned risk analyst and systems expert who over the course of a 50-year career has seen more than his share of delusional thinking among IT professionals, government officials, and corporate executives, before, during, and after massive software failures.In 2005’s “Why Software Fails,” in IEEE Spectrum, a seminal article documenting the causes behind large-scale software failures, Charette noted, “The biggest tragedy is that software failure is for the most part predictable and avoidable. Unfortunately, most organizations don’t see preventing failure as an urgent matter, even though that view risks harming the organization and maybe even destroying it. Understanding why this attitude persists is not just an academic exercise; it has tremendous implications for business and society.”Two decades and several trillion wasted dollars later, he finds that people are making the same mistakes. They claim their project is unique, so past lessons don’t apply. They underestimate complexity. Managers come out of the gate with unrealistic budgets and timelines. Testing is inadequate or skipped entirely. Vendor promises that are too good to be true are taken at face value. Newer development approaches like DevOps or AI copilots are implemented without proper training or the organizational change necessary to make the most of them.What’s worse, the huge impacts of these missteps on end users aren’t fully accounted for. When the Canadian government’s Phoenix paycheck system initially failed, for instance, the developers glossed over the protracted financial and emotional distress inflicted on tens of thousands of employees receiving erroneous paychecks; problems persist today, nine years later. Perhaps that’s because, as Charette told me recently, IT project managers don’t have professional licensing requirements and are rarely, if ever, held legally liable for software debacles.While medical devices may seem a far cry from giant IT projects, they have a few things in common. As Special Projects Editor Stephen Cass uncovered in this month’s The Data, the U.S. Food and Drug Administration recalls on average 20 medical devices per month due to software issues.“Software is as significant as electricity. We would never put up with electricity going out every other day, but we sure as hell have no problem having AWS go down.” —Robert N. CharetteLike IT projects, medical devices face fundamental challenges posed by software complexity. Which means that testing, though rigorous and regulated in the medical domain, can’t possibly cover every scenario or every line of code. The major difference between failed medical devices and failed IT projects is that a huge amount of liability attaches to the former.“When you’re building software for medical devices, there are a lot more standards that have to be met and a lot more concern about the consequences of failure,” Charette observes. “Because when those things don’t work, there’s tort law available, which means manufacturers are on the hook. It’s much harder to bring a case and win when you’re talking about an electronic payroll system.”Whether a software failure is hyperlocal, as when a medical device fails inside your body, or spread across an entire region, like when an airline’s ticketing system crashes, organizations need to dig into the root causes and apply those lessons to the next device or IT project if they hope to stop history from repeating itself.“Software is as significant as electricity,” Charette says. “We would never put up with electricity going out every other day, but we sure as hell have no problem accepting AWS going down or telcos or banks going out.” He lets out a heavy sigh worthy of A.A. Milne’s Eeyore. “People just kind of shrug their shoulders.”

01.12.2025 19:52:15

Technologie a věda
10 dní

Innovation, expertise, and efficiency often take center stage in the engineering world. Yet engineering’s impact lies not only in technical advancement but also in its ability to serve the greater good. This foundational principle is behind IEEE’s public imperative initiatives which apply our efforts and expertise to support our mission to advance technology for humanity with a direct benefit to society. Serving society Public imperative activities and initiatives serve society by promoting understanding, impact for humans and our environment, and responsible use of science and technology. These initiatives encompass a wide range of efforts, including STEM outreach, humanitarian technology deployments, public education on emerging technologies, and sustainability. Unlike many efforts advancing technology, these initiatives are not designed with financial opportunity in mind. Instead, they fulfill IEEE’s designation as a 501(c)(3) public charity engaged in scientific and educational activities for the benefit of the engineering community and the public.Building a Better WorldAcross the globe, IEEE members and volunteers dedicate their time and use their talents, experiences, and expertise to lead, organize, and drive activities to advance technology for humanity. The IEEE Social Impact report showcases a selection of recent projects and initiatives that support that mission.In my March column, I described my vision for One IEEE, which is aimed at empowering IEEE’s diverse units to work together in ways that magnify their individual and collective impact. Within the framework of One IEEE, public imperative activities are not peripheral; they are central to unifying the organization and amplifying our global relevance. Across IEEE’s varied regions, societies, and technical communities, these activities align efforts around a shared mission. They provide our members from different disciplines and geographies the opportunity to collaborate on projects that transcend boundaries, fostering interdisciplinary innovation and global stewardship.Such activities also offer members opportunities to apply their technical expertise in service of societal needs. Whether finding innovative solutions to connect the unconnected or developing open-source educational tools for students, we are solving real-world problems. The initiatives transform abstract technical knowledge into actionable solutions, reinforcing the idea that technology is not just about building systems—it’s about building futures.For our young professionals and students, these activities offer hands-on experiences that connect technical skills with real-world applications, inspiring the next generation to pursue careers in engineering with purpose and passion. These activities also create mentorship opportunities, leadership pathways, and a sense of belonging within the wider IEEE community.Principled tech leaderIn an age when technology influences practically every aspect of life—from health care and energy to communication and transportation—IEEE must, as a leading technical authority, also serve as a socially responsible leader. Public imperative activities include IEEE’s commitment to ethical development, university and pre-university education, and accessible innovation. They help bridge the gap between technical communities and the public, working to ensure that engineering solutions are accessible, equitable, and aligned with societal values.From a strategic standpoint, public imperatives also support IEEE’s long-term sustainability. The organization is redesigning its budget process to emphasize aligning financial resources with mission-driven goals. One of the guiding principles is to publicize IEEE’s public charity status and invest accordingly.That means promoting our public imperatives in funding decisions, integrating them into operational planning, and measuring their outcomes with engineering rigor. By treating these activities as core infrastructure, IEEE ensures that its resources are deployed in ways that maximize public benefit and organizational impact.Public imperatives are vital to the success of One IEEE. They embody the organization’s mission, unify its global membership, and demonstrate the societal relevance of engineering and technology. They offer our members the opportunity to apply their skills in meaningful ways, contribute to public good, and shape the future of technology with integrity.Through our public imperative activities, IEEE is a force for innovation and a driver of meaningful impact.This article appears in the December 2025 print issue as “Engineering With Purpose.”

01.12.2025 19:00:02

Technologie a věda
10 dní

For the past decade, progress in artificial intelligence has been measured by scale: bigger models, larger datasets, and more compute. That approach delivered astonishing breakthroughs in large language models (LLMs); in just five years, AI has leapt from models like GPT-2, which could hardly mimic coherence, to systems like GPT-5 that can reason and engage in substantive dialogue. And now early prototypes of AI agents that can navigate codebases or browse the web point towards an entirely new frontier.But size alone can only take AI so far. The next leap won’t come from bigger models alone. It will come from combining ever-better data with worlds we build for models to learn in. And the most important question becomes: What do classrooms for AI look like?In the past few months Silicon Valley has placed its bets, with labs investing billions in constructing such classrooms, which are called reinforcement learning (RL) environments. These environments let machines experiment, fail, and improve in realistic digital spaces. AI Training: From Data to ExperienceThe history of modern AI has unfolded in eras, each defined by the kind of data that the models consumed. First came the age of pretraining on internet-scale datasets. This commodity data allowed machines to mimic human language by recognizing statistical patterns. Then came data combined with reinforcement learning from human feedback—a technique that uses crowd workers to grade responses from LLMs—which made AI more useful, responsive, and aligned with human preferences.We have experienced both eras firsthand. Working in the trenches of model data at Scale AI exposed us to what many consider the fundamental problem in AI: ensuring that the training data fueling these models is diverse, accurate, and effective in driving performance gains. Systems trained on clean, structured, expert-labeled data made leaps. Cracking the data problem allowed us to pioneer some of the most critical advancements in LLMs over the past few years.Today, data is still a foundation. It is the raw material from which intelligence is built. But we are entering a new phase where data alone is no longer enough. To unlock the next frontier, we must pair high-quality data with environments that allow limitless interaction, continuous feedback, and learning through action. RL environments don’t replace data; they amplify what data can do by enabling models to apply knowledge, test hypotheses, and refine behaviors in realistic settings.How an RL Environment WorksIn an RL environment, the model learns through a simple loop: it observes the state of the world, takes an action, and receives a reward that indicates whether that action helped accomplish a goal. Over many iterations, the model gradually discovers strategies that lead to better outcomes. The crucial shift is that training becomes interactive—models aren’t just predicting the next token but improving through trial, error, and feedback.For example, language models can already generate code in a simple chat setting. Place them in a live coding environment—where they can ingest context, run their code, debug errors, and refine their solution—and something changes. They shift from advising to autonomously problem-solving.This distinction matters. In a software-driven world, the ability for AI to generate and test production-level code in vast repositories will mark a major change in capability. That leap won’t come solely from larger datasets; it will come from immersive environments where agents can experiment, stumble, and learn through iteration—much like human programmers do. The real world of development is messy: Coders have to deal with underspecified bugs, tangled codebases, vague requirements. Teaching AI to handle that mess is the only way it will ever graduate from producing error-prone attempts to generating consistent and reliable solutions.Can AI Handle the Messy Real World?Navigating the internet is also messy. Pop-ups, login walls, broken links, and outdated information are woven throughout day-to-day browsing workflows. Humans handle these disruptions almost instinctively, but AI can only develop that capability by training in environments that simulate the web’s unpredictability. Agents must learn how to recover from errors, recognize and persist through user-interface obstacles, and complete multi-step workflows across widely used applications.Some of the most important environments aren’t public at all. Governments and enterprises are actively building secure simulations where AI can practice high-stakes decision-making without real-world consequences. Consider disaster relief: It would be unthinkable to deploy an untested agent in a live hurricane response. But in a simulated world of ports, roads, and supply chains, an agent can fail a thousand times and gradually get better at crafting the optimal plan.Every major leap in AI has relied on unseen infrastructure, such as annotators labeling datasets, researchers training reward models, and engineers building scaffoldings for LLMs to use tools and take action. Finding large-volume and high-quality datasets was once the bottleneck in AI, and solving that problem sparked the previous wave of progress. Today, the bottleneck is not data—it’s building RL environments that are rich, realistic, and truly useful.The next phase of AI progress won’t be an accident of scale. It will be the result of combining strong data foundations with interactive environments that teach machines how to act, adapt, and reason across messy real-world scenarios. Coding sandboxes, OS and browser playgrounds, and secure simulations will turn prediction into competence.

01.12.2025 13:00:02

Technologie a věda
11 dní

Introduced in 1930 by Lionel Corp.—better known for its electric model trains—the fully functional toy stove shown at top had two electric burners and an oven that heated to 260 °C. It came with a set of cookware, including a frying pan, a pot with lid, a muffin tin, a tea kettle, and a wooden potato masher. I would have also expected a spoon, whisk, or spatula, but maybe most girls already had those. Just plug in the toy, and housewives-in-training could mimic their mothers frying eggs, baking muffins, or boiling water for tea.A brief history of toy stovesEven before electrification, cast-iron toy stoves had become popular in the mid-19th century. At first fueled by coal or alcohol and later by oil or gas, these toy stoves were scaled-down working equivalents of the real thing. Girls could use their stoves along with a toy waffle iron or small skillet to whip up breakfast. If that wasn’t enough fun, they could heat up a miniature flatiron and iron their dolls’ clothes. Designed to help girls understand their domestic duties, these toys were the gendered equivalent of their brothers’ toy steam engines. If you’re thinking fossil-fuel-powered “educational toys” are a recipe for disaster, you are correct. Many children suffered serious burns and sometimes death by literally playing with fire. Then again, people in the 1950s thought playing with uranium was safe.When electric toy stoves came on the scene in the 1910s, things didn’t get much safer, as the new entrants also lacked basic safety features. The burners on the 1930 Lionel range, for example, could only be turned off or on, but at least kids weren’t cooking over an open flame. At 86 centimeters tall, the Lionel range was also significantly larger than its more diminutive predecessors. Just the right height for young children to cook standing up. Western Electric’s Junior Electric Range was demonstrated at an expo in 1915 in New York City.The StrongWell before the Lionel stove, the Western Electric Co. had a cohort of girls demonstrating its Junior Electric Range at the Electrical Exposition held in New York City in 1915. The Junior Electric held its own in a display of regular sewing-machine motors, vacuum cleaners, and electric washing machines.The Junior Electric stood about 30 cm tall with six burners and an oven. The electric cord plugged into a light fixture socket. Children played with it while sitting on the floor or as it sat on a table. A visitor to the Expo declared the miniature range “the greatest electrical novelty in years.” Cooking by electricity in any form was still innovative—George A. Hughes had introduced his eponymous electric range just five years earlier. When the Junior Electric came along, less than a third of U.S. households had been wired for electric lights.How electricity turned cooking into a scienceOne reason to give little girls working toy stoves was so they could learn how to differentiate between a hot flame and low heat and get a feel for cooking without burning the food. These are skills that come with experience. Directions like “bake until done in a moderate oven,” a common line in 19th-century recipes, require a lot more tacit knowledge than is needed to, say, throw together a modern boxed brownie mix. The latter comes with detailed instructions and assumes you can control your oven temperature to within a few degrees. That type of precision simply didn’t exist in the 19th century, in large part because it was so difficult to calibrate wood- or coal-burning appliances. Girls needed to start young to master these skills by the time they married and were expected to handle the household cooking on their own.Electricity changed the game.In his comparison of “fireless cookers,” an engineer named Percy Wilcox Gumaer exhaustively tested four different electric ovens and then presented his findings at the 32nd Annual Convention of the American Institute of Electrical Engineers (a forerunner of today’s IEEE) on 2 July 1915. At the time, metered electricity was more expensive than gas or coal, so Gumaer investigated the most economical form of cooking with electricity, comparing different approaches such as longer cooking at low heat versus faster cooking in a hotter oven, the effect of heat loss when opening the oven door, and the benefits of searing meat on the stovetop versus in the oven before making a roast.Gumaer wasn’t starting from scratch. Similar to how Yoshitada Minami needed to learn the ideal rice recipe before he could design an automatic rice cooker, Gumaer decided that he needed to understand the principles of roasting beef. Minami had turned to his wife, Fumiko, who spent five years researching and testing variations of rice cooking. Gumaer turned to the work of Elizabeth C. Sprague, a research assistant in nutrition investigations at the University of Illinois, and H.S. Grindley, a professor of general chemistry there.In their 1907 publication “A Precise Method of Roasting Beef,” Sprague and Grindley had defined qualitative terms like medium rare and well done by precisely measuring the internal temperature in the center of the roast. They concluded that beef could be roasted at an oven temperature between 100 and 200 °C.Continuing that investigation, Gumaer tested 22 roasts at 100, 120, 140, 160, and 180 °C, measuring the time they took to reach rare, medium rare, and well done, and calculating the cost per kilowatt-hour. He repeated his tests for biscuits, bread, and sponge cake.In case you’re wondering, Gumaer determined that cooking with electricity could be a few cents cheaper than other methods if you roasted the beef at 120 °C instead of 180 °C. It’s also more cost-effective to sear beef on the stovetop rather than in the oven. Biscuits tasted best when baked at 200 to 240 °C, while sponge cake was best between 170 and 200 °C. Bread was better at 180 to 240 °C, but too many other factors affected its quality. In true electrical engineering fashion, Gumaer concluded that “it is possible to reduce the art of cooking with electricity to an exact science.”Electric toy stoves as educational toolsThis semester, I’m teaching an introductory class on women’s and gender studies, and I told my students about the Lionel toy oven. They were horrified by the inherent danger. One incredulous student kept asking, “This is real? This is not a joke?” Instead of learning to cook with a toy that could heat to 260 °C, many of us grew up with the Easy-Bake Oven. The 1969 model could reach about 177° C with its two 100-watt incandescent light bulbs. That was still hot enough to cause burns, but somehow it seemed safer. (Since 2011, Easy-Bakes have used a heating element instead of lightbulbs.) The Queasy Bake Cookerator, designed to whip up “gross-looking, great-tasting snacks,” was marketed to boys. The StrongThe Easy-Bake I had wasn’t particularly gendered. It was orange and brown and meant to look like a different new-fangled appliance of the day, the microwave oven. But by the time my students were playing with Easy-Bake Ovens, the models were in the girly hues of pink and purple. In 2002, Hasbro briefly tried to lure boys by releasing the Queasy Bake Cookerator, which the company marketed with disgusting-sounding foods like Chocolate Crud Cake and Mucky Mud. The campaign didn’t work, and the toy was soon withdrawn.Similarly, Lionel’s electric toy range didn’t last long on the market. Launched in 1930, it had been discontinued by 1932, but that may have had more to do with timing. The toy cost US $29.50, the equivalent of a men’s suit, a new bed, or a month’s rent. In the midst of a global depression, the toy stove was an extravagance. Lionel reverted to selling electric trains to boys.My students discussed whether cooking is still a gendered activity. Although they agreed that meal prep disproportionately falls on women even now, they acknowledged the rise of the male chef and credited televised cooking shows with closing the gender gap. As a surprise, we discovered that one of the students in the class, Haley Mattes, competed in and won Chopped Junior as a 12-year-old.Haley had a play kitchen as a kid that was entirely fake: fake food, fake pans, fake utensils. She graduated to the Easy-Bake Oven, but really got into cooking the same way girls have done for centuries, by learning beside her grandmas.Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology.An abridged version of this article appears in the December 2025 print issue as “Too Hot to Handle.”ReferencesI first came across a description of Western Electric’s Junior Electric Range in “The Latest in Current Consuming Devices,” in the November 1915 issue of Electrical Age.The Strong National Museum of Play, in Rochester, N.Y., has a large collection of both cast-iron and electric stoves. The Strong also published two blogs that highlighted Lionel’s toy: “Kids and Cooking” and “Lionel for Ladies?”Although Ron Hollander’s All Aboard! The Story of Joshua Lionel Cowen & His Lionel Train Company (Workman Publishing, 1981) is primarily about toy trains, it includes a few details about how Lionel marketed its electric toy stove to girls.

30.11.2025 13:00:01

Technologie a věda
12 dní

Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.SOSV Robotics Matchup: 1–5 December 2025, ONLINEICRA 2026: 1–5 June 2026, VIENNAEnjoy today’s videos! Step behind the scenes with Walt Disney Imagineering Research & Development and discover how Disney uses robotics, AI, and immersive technology to bring stories to life! From the brand new self-walking Olaf in World of Frozen and BDX Droids to cutting-edge attractions like Millennium Falcon: Smugglers Run, see how magic meets innovation.[ Disney Experiences ]We just released a new demonstration of Mentee’s V3 humanoid robots completing a real world logistics task together. Over an uninterrupted 18-minute run, the robots autonomously move 32 boxes from eight piles to storage racks of different heights. The video shows steady locomotion, dexterous manipulation, and reliable coordination throughout the entire task.And there’s an uncut 18 minute version of this at the link.[ MenteeBot ]Thanks, Yovav!This video contains graphic depictions of simulated injuries. Viewer discretion is advised.In this immersive overview, guided by the DARPA Triage Challenge program manager, retired Army Col. Jeremy C. Pamplin, M.D., you’ll experience how teams of innovators, engineers, and DARPA are redefining the future of combat casualty care. Be sure to look all around! Check out competition runs, behind-the-scenes of what it takes to put on a DARPA Challenge, and glimpses into the future of lifesaving care.Those couple of minutes starting at 6:50 with the human medic and robotic teaming was particularly cool.[ DARPA ]You don’t need to build a humanoid robot if you can just make existing humanoids a lot better.I especially love 0:45 because you know what? Humanoids should spend more time sitting down, for all kinds of reasons. And of course, thank you for falling and getting up again, albeit on some of the squishiest grass on the planet.[ Flexion ]“Human-in-the-Loop Gaussian Splatting” wins best paper title of the week.[ Paper ] via [ IEEE Robotics and Automation Letters in IEEE Xplore ]Scratch that, “Extremum Seeking Controlled Wiggling for Tactile Insertion” wins best paper title of the week.[ University of Maryland PRG ]The battery swapping on this thing is... Unfortunate.[ LimX Dynamics ]To push the boundaries of robotic capability, researchers in the Department of Mechanical Engineering at Carnegie Mellon University in collaboration with The University of Washington and Google Deepmind, have developed a new tactile sensing system that enables four-legged robots to carry unsecured, cylindrical objects on their backs. This system, known as LocoTouch, features a network of tactile sensors that spans the robot’s entire back. As an object shifts, the sensors provide real-time feedback on its position, allowing the robot to continuously adjust its posture and movement to keep the object balanced.[ Carnegie Mellon University ]This robot is in more need of googly eyes than any other robot I’ve ever seen.[ Zarrouk Lab ]DPR Construction has deployed Field AI’s autonomy software on a quadruped robot at the company’s job site in Santa Clara, CA, to greatly improve its daily surveying and data collection processes. By automating what has traditionally been a very labor intensive and time consuming process, Field AI is helping the DPR team operate more efficiently and effectively, while increasing project quality.[ FieldAI ]In our second episode of AI in Motion, our host, Waymo AI researcher Vincent Vanhoucke, talks with a robotics startup founder Sergey Levine, who left a career in academic research to build better robots for the home and workplace.[ Waymo ]

29.11.2025 16:30:01

Technologie a věda
12 dní
Technologie a věda
13 dní

The EPICS (Engineering Projects in Community Service) in IEEE initiative had a record year in 2025, funding 48 projects involving nearly 1,000 students from 17 countries. The IEEE Educational Activities program approved the most projects this year, distributing US $290,000 in funding and engaging more students than ever before in innovative, hands-on engineering systems.The program offers students opportunities to engage in service learning and collaborate with engineering professionals and community organizations to develop solutions that address local community challenges. The projects undertaken by IEEE groups encompass student branches, sections, society chapters, and affinity groups including Women in Engineering and Young Professionals.EPICS in IEEE provides funding up to $10,000, along with resources and mentorship, for projects focused on four key areas of community improvement: education and outreach, environment, access and abilities, and human services.This year, EPICS partnered with four IEEE societies and the IEEE Standards Association on 23 of the 48 approved projects. The Antennas and Propagation Society supported three, the Industry Applications Society (IAS) funded nine, the Instrumentation and Measurement Society (IMS) sponsored five, the Robotics and Automation Society supported two, the Solid State Circuits Society (SSCS) provided funding for three, and the IEEE Standards Association sponsored one.The stories of the partner-funded projects demonstrate the impact and the effect the projects have on the students and their communities.Matoruco agroecological gardenThe IAS student branch at the Universidad Pontificia Bolivariana in Colombia worked on a project that involved water storage, automated irrigation, and waste management. The goal was to transform the Matoruco agroecological garden at the Institución Educativa Los Garzones into a more lively, sustainable, and accessible space. These EPICS in IEEE team members from the Universidad Pontificia Bolivariana in Colombia are configuring a radio communications network that will send data to an online dashboard showing the solar power usage, pump status, and soil moisture for the Matoruco agroecological garden at the Institución Educativa Los Garzones. EPICS in IEEEBy using an irrigation automation system, electric pump control, and soil moisture monitoring, the team aimed to show how engineering concepts combine academic knowledge and practical application. The initiative uses monocrystalline solar panels for power, a programmable logic controller to automatically manage pumps and valves, soil moisture sensors for real-time data, and a LoRa One network (a proprietary radio communication system based on spread spectrum modulation) to send data to an online dashboard showing solar power usage, pump status, and soil moisture.Los Garzones preuniversity students were taught about the irrigation system through hands-on projects, received training on organic waste management from university students, and participated in installation activities. The university team also organizes garden cleanup events to engage younger students with the community garden.“We seek to generate a true sense of belonging by offering students and faculty a gathering place for hands-on learning and shared responsibility,” says Rafael Gustavo Ramos Noriega, the team lead and fourth-year electronics engineering student. “By integrating technical knowledge with fun activities and training sessions, we empower the community to keep the garden alive and continue improving it.“This project has been an unmatched platform for preparing me for a professional career,” he added. “By leading everything from budget planning to the final installation, I have experienced firsthand all the stages of a real engineering project: scope definition, resource management, team coordination, troubleshooting, and delivering tangible results. All of this reinforces my goal of dedicating myself to research and development in automation and embedded systems and contributing innovation in the agricultural and environmental sectors to help more communities and make my mark.”The project received $7,950 from IAS. Students give a tour of the systems they installed at the Matoruco agroecological garden. A smart braille systemMore than 1.5 million individuals in Pakistan are blind, including thousands of children who face barriers to accessing essential learning resources, according to the International Agency for the Prevention of Blindness. To address the need for accessible learning tools, a student team from the Mehran University of Engineering and Technology (MUET) and the IEEE Karachi Section created BrailleGenAI: Empowering Braille Learning With Edge AI and Voice Interaction.The interactive system for blind children combines edge artificial intelligence, generative AI, and embedded systems, says Kainat Fizzah Muhammad, a project leader and electrical engineering student at MUET. The system uses a camera to recognize tactile braille blocks and provide real-time audio feedback via text-to-speech technology. It includes gamified modules designed to support literacy, numeracy, logical reasoning, and voice recognition.The team partnered with the Hands Welfare Foundation, a nonprofit in Pakistan that focuses on inclusive education, disability empowerment, and community development. The team collaborated with the Ida Rieu School, part of the Ida Rieu Welfare Association, which serves the visually and hearing impaired.“These partnerships have been instrumental in helping us plan outreach activities, gather input from experts and caregivers, and prepare for usability testing across diverse environments,” says Attiya Baqai, a professor in the MUET electronic engineering department. Support from the Hands foundation ensured the solution was shaped by the real-world needs of the visually impaired community.SSCS provided $9,155 in funding. The student team shows how the smart braille system they developed works. Tackling air pollutionMacedonia’s capital, Skopje, is among Europe’s most polluted cities, particularly in winter, due to thick smog caused by temperature changes, according to the World Health Organization. The WHO reports that the city’s air contains particles that can cause health issues without early warning signs—known as silent killers.A team at Sts. Cyril and Methodius University created a system to measure and publicize local air pollution levels through its What We Breathe project. It aims to raise awareness and improve health outcomes, particularly among the city’s children.“Our goal is to provide people with information on current pollution levels so they can make informed decisions regarding their exposure and take protective measures,” says Andrej Ilievski, an IEEE student member majoring in computer hardware engineering and electronics. “We chose to focus on schools first because children’s lungs and immune systems are still developing, making them one of our population’s most vulnerable demographics.”The project involved 10 university students working with high schools, faculty, and the Society of Environmental Engineers of Macedonia to design and build a sensing and display tool that communicates via the Internet.“By leading everything from budget planning to the final installation, I have experienced firsthand all the stages of a real engineering project: scope definition, resource management, team coordination, troubleshooting, and delivering tangible results.” —Rafael Gustavo Ramos Noriega“Our sensing unit detects particulate matter, temperature, and humidity,” says project leader Josif Kjosev, an electronics professor at the university. “It then transmits that data through a Wi-Fi connection to a public server every 5 minutes, while our display unit retrieves the data from the server.”“Since deploying the system,” Ilievski says, “everyone on the team has been enthusiastic about how well the project connects with their high school audience.”The team says it hopes students will continue to work on new versions of the devices and provide them to other interested schools in the area.“For most of my life, my academic success has been on paper,” Ilievski says. “But thanks to our EPICS in IEEE project, I finally have a real, physical object that I helped create.“We’re grateful for the opportunity to make this project a reality and be part of something bigger.”The project received $8,645 from the IMS. Society partnerships countThanks to partnerships with IEEE societies, EPICS can provide more opportunities to students around the world. The program also includes mentors from societies and travel grants for conferences, enhancing the student experience.The collaborations motivate students to apply technologies in the IEEE societies’ areas of interest to real-world problems, helping them improve their communities and fostering continued engagement with the society and IEEE.You can learn how to get involved with EPICS by visiting its website.

28.11.2025 19:00:02

Technologie a věda
13 dní

For years, Gwen Shaffer has been leading Long Beach, Calif. residents on “data walks,” pointing out public Wi-Fi routers, security cameras, smart water meters, and parking kiosks. The goal, according to the professor of journalism and public relations at California State University, Long Beach, was to learn how residents felt about the ways in which their city collected data on them.Gwen ShafferGwen Shaffer is a professor of journalism and public relations at California State University, Long Beach. She is the principal investigator on a National Science Foundation–funded project aimed at providing Long Beach residents with greater agency over the personal data their city collects.She also identified a critical gap in smart city design today: While cities may disclose how they collect data, they rarely offer ways to opt out. Shaffer spoke with IEEE Spectrum about the experience of leading data walks, and about her research team’s efforts to give citizens more control over the data collected by public technologies.What was the inspiration for your data walks?Gwen Shaffer: I began facilitating data walks in 2021. I was studying residents’ comfort levels with city-deployed technologies that collect personally identifiable information. My first career as a political reporter has influenced my research approach. I feel strongly about conducting applied rather than theoretical research. And I always go into a study with the goal of helping to solve a real-world challenge and inform policy.How did you organize the walks?Shaffer: We posted data privacy labels with a QR code that residents can scan and find out how their data are being used. Downtown, they’re in Spanish and English. In Cambodia Town, we did them in Khmer and English.What happened during the walks?Shaffer: I’ll give you one example. In a couple of the city-owned parking garages, there are automated license-plate readers at the entrance. So when I did the data walks, I talked to our participants about how they feel about those scanners. Because once they have your license plate, if you’ve parked for fewer than two hours, you can breeze right through. You don’t owe money.Responses were contextual and sometimes contradictory. There were residents who said, “Oh, yeah. That’s so convenient. It’s a time saver.” So I think that shows how residents are willing to make trade-offs. Intellectually, they hate the idea of the privacy violation, but they also love convenience.What surprised you most?Shaffer: One of the participants said, “When I go to the airport, I can opt out of the facial scan and still be able to get on the airplane. But if I want to participate in so many activities in the city and not have my data collected, there’s no option.”There was a cyberattack against the city in November 2023. Even though we didn’t have a prompt asking about it, people brought it up on their own in almost every focus group. One said, “I would never connect to public Wi-Fi, especially after the city of Long Beach’s site was hacked.”What is the app your team is developing?Shaffer: Residents want agency. So that’s what led my research team to connect with privacy engineers at Carnegie Mellon University, in Pittsburgh. Norman Sadeh and his team had developed what they called the IoT Assistant. So I told them about our project, and proposed adapting their app for city-deployed technologies. Our plan is to give residents the opportunity to exercise their rights under the California Consumer Privacy Act with this app. So they could say, “Passport Parking app, delete all the data you’ve already collected on me. And don’t collect any more in the future.”This article appears in the December 2025 print issue as “Gwen Shaffer.”

28.11.2025 13:00:02

Technologie a věda
14 dní

From the honey in your tea to the blood in your veins, materials all around you have a hidden talent. Some of these substances, when engineered in specific ways, can act as memristors—electrical components that can “remember” past states. Memristors are often used in chips that both perform computations and store data. They are devices that store data as particular levels of resistance. Today, they are constructed as a thin layer of titanium dioxide or similar dielectric material sandwiched between two metal electrodes. Applying enough voltage to the device causes tiny regions in the dielectric layer—where oxygen atoms are missing—to form filaments that bridge the electrodes or otherwise move in a way that makes the layer more conductive. Reversing the voltage undoes the process. Thus, the process essentially gives the memristor a memory of past electrical activity.Last month, while exploring the electrical properties of fungi, a group at The Ohio State University found first-hand that some organic memristors have benefits beyond those made with conventional materials. Not only can shiitake act as a memristor, for example, but it may be useful in aerospace or medical applications because the fungus demonstrates high levels of radiation resistance. The project “really mushroomed into something cool,” lead researcher John LaRocco says with a smirk.Researchers have learned that other unexpected materials may give memristors an edge. They may be more flexible than typical memristors or even biodegradable. Here’s how they’ve made memristors from strange materials, and the potential benefits these odd devices could bring:MushroomsLaRocco and his colleagues were searching for a proxy for brain circuitry to use in electrical stimulation research when they stumbled upon something interesting—shiitake mushrooms are capable of learning in a way that’s similar to memristors.The group set out to evaluate just how well shiitake can remember electrical states by first cultivating nine samples and curating optimal growing conditions, including feeding them a mix of farro, wheat, and hay.Once fully matured, the mushrooms were dried and rehydrated to a level that made them moderately conductive. In this state, the fungi’s structure includes conductive pathways that emulate the oxygen vacancies in commercial memristors. The scientists plugged them into circuits and put them through voltage, frequency, and memory tests. The result? Mushroom memristors.It may smell “kind of funny,” LaRocco says, but shiitake performs surprisingly well when compared to conventional memristors. Around 90 percent of the time, the fungus maintains ideal memristor-like behavior for signals up to 5.85 kilohertz. While traditional materials can function at frequencies orders of magnitude faster, these numbers are notable for biological materials, he says. What fungi lack in performance, they may make up for in other properties. For one, many mushrooms—including shiitake—are highly resistant to radiation and other environmental dangers. “They’re growing in logs in Fukushima and a lot of very rough parts of the world, so that’s one of the appeals,” LaRocco says.Shiitake are also an environmentally-friendly option that’s already commercialized. “They’re already cultured in large quantities,” LaRocco explains. “One could simply leverage existing logistics chains” if the industry wanted to commercialize mushroom memristors. The use cases for this product would be niche, he thinks, and would center around the radiation resistance that shiitake boasts. Mushroom GPUs are unlikely, LaRocco says, but he sees potential for aerospace and medical applications.HoneyIn 2022, engineers at Washington State University interested in green electronics set out to study if honey could serve as a good memristor. “Modern electronics generate 50 million tons of e-waste annually, with only about 20 percent recycled,” says Feng Zhao, who led the work and is now at Missouri University of Science and Technology. “Honey offers a biodegradable alternative.”The researchers first blended commercial honey with water and stored it in a vacuum to remove air bubbles. They then spread the mixture on a piece of copper, baked the whole stack at 90 °C for nine hours to stabilize it, and, finally, capped it with circular copper electrodes on top—completing the honey-based memristor sandwich.The resulting 2.5-micrometer-thick honey layer acted like oxide dielectric in conventional memristors: a place for conductive pathways to form and dissolve, changing resistance with voltage. In this setup, when voltage is applied, copper filaments extend through the honey.The honey-based memristor was able to switch from low to high resistance in 500 nanoseconds and back to low in 100 nanoseconds, which is comparable to speeds in some non-food-based memristive materials. One advantage of honey is that it’s “cheap and widely available, making it an attractive candidate for scalable fabrication,” Zhao says. It’s also “fully biodegradable and dissolves in water, showing zero toxic waste.” In the 2022 paper, though, the researchers note that for a honey-based device to be truly biodegradable, the copper components would need to be replaced with dissolvable metals. They suggest options like magnesium and tungsten, but also write that the performance of memristors made from these metals is still “under investigation.”BloodConsidering it a potential means of delivering healthcare, a group in India wondered if blood would make a good memristor in 2011, just three years after the first memristor was built.The experiments were pretty simple. The researchers filled a test tube with fresh, type O+ human blood and inserted two conducting wire probes. The wires were connected with a power supply, creating a complete circuit, and voltages of one, two, and three volts were applied in repeated steps. Then, to test the memristor-qualities of blood as it exists in the human body, the researchers set up a “flow mode” that applied voltage to the blood as it flowed from a tube at up to one drop per second.The experiments were preliminary and only measured current passing through the blood, but resistance could be set by applying voltage. Crucially, resistance changed by less than 10 percent in the 30 minute period after voltage was applied. In the International Journal of Medical Engineering and Informatics, the scientists wrote that, because of these observations, their contraption “looks like a human blood memristor.”They suggested that this knowledge could be useful in treating illness. Sick people may have ion imbalances in certain parts of their bodies—instead of prescribing medication, why not employ a circuit component made of human tissue to solve the problem? In recent years, blood-based memristors have been tested by other scientists as means to treat conditions ranging from high blood sugar to nearsightedness.

27.11.2025 15:00:01

Technologie a věda
14 dní

Early in Levi Unema’s career as an electrical engineer, he was presented with an unusual opportunity. While working on assembly lines at an automotive parts supplier in 2015, he got a surprise call from his high-school science teacher that set him off on an entirely new path: piloting underwater robots to explore the ocean’s deepest abysses.That call came from Harlan Kredit, a nationally renowned science teacher and board member of a Rhode Island-based nonprofit called the Global Foundation for Ocean Exploration (GFOE). The organization was looking for an electrical engineer to help design, build, and pilot remotely operated vehicles (ROVs) for the U.S. National Oceanic and Atmospheric Administration.Levi UnemaEmployerDeep Exploration SolutionsOccupationROV engineerEducation Bachelor’s degree in electrical engineering, Michigan Technological UniversityThis was an exciting break for Unema, a Washington state native who had grown up tinkering with electronics and exploring the outdoors. Unema joined the team in early 2016 and has since helped develop and operate deep-sea robots for scientific expeditions around the globe.The GFOE’s contract with NOAA expired in July, forcing the engineering team to disband. But soon after, Unema teamed up with four former colleagues to start their own ROV consultancy, called Deep Exploration Solutions, to continue the work he’s so passionate about.“I love the exploration and just seeing new things every day,” he says. “And the engineering challenges that go along with it are really exciting, because there’s a lot of pressure down there and a lot of technical problems to solve.”Nature and TechnologyUnema’s fascination with electronics started early. Growing up in Lynden, Wash., he took apart radios, modified headphones, and hacked together USB chargers from AA batteries. “I’ve always had to know how things work,” he says. He was also a Boy Scout, and much of his youth was spent hiking, camping, and snowboarding.That love of both technology and nature can be traced back, at least in part, to his parents—his father was a civil engineer, and his mother was a high-school biology teacher. But another major influence growing up was Kredit, the science teacher who went on to recruit him. (Kredit was also a colleague of Unema’s mother.)Kredit has won numerous awards for his work as an educator, including the Presidential Award for Excellence in Science Teaching in 2004. Like Unema, he also shares a love for the outdoors as Yellowstone National Park’s longest-serving park ranger. “He was an excellent science teacher, very inspiring,” says Unema.When Unema graduated high school in 2010, he decided to enroll at his father’s alma mater, Michigan Technological University, to study engineering. He was initially unsure what discipline to follow and signed up for the general engineering course, but he quickly settled on electrical engineering.A summer internship at a steel mill run by the multinational corporation ArcelorMittal introduced Unema to factory automation and assembly lines. After graduating in 2014 he took a job at Gentex Corp. in Zeeland, Mich., where he worked on manufacturing systems and industrial robotics.Diving Into Underwater RoboticsIn late 2015, he got the call from Kredit asking if he’d be interested in working on underwater robots for GFOE. The role involved not just engineering these systems, but also piloting them. Taking the plunge was a difficult choice, says Unema, as he’d just been promoted at Gentex. But the promise of travel combined with the novel engineering challenges made it too good an opportunity to turn down.Building technology that can withstand the crushing pressure at the bottom of the ocean is tough, he says, and you have to make trade-offs between weight, size, and cost. Everything has to be waterproof, and electronics have to be carefully isolated to prevent them from grounding on the ocean floor. Some components are pressure-tolerant, but most must be stored in pressurized titanium flasks, so the components must be extremely small to minimize the size of the metallic housing. Unema conducts predive checks from the Okeanos Explorer’s control room. Once the ROV is launched, scientists will watch the camera feeds and advise his team where to direct the vehicle.Art Howard“You’re working very closely with the mechanical engineer to fit the electronics in a really small space,” he says. “The smaller the cylinder is, the cheaper it is, but also the less mass on the vehicle. Every bit of mass means more buoyancy is required, so you want to keep things small, keep things light.”Communications are another challenge. The ROVs rely on several kilometers of cable containing just three single-mode optical fibers. “All the communication needs to come together and then go up one cable,” Unema says. “And every year new instruments consume more data.”He works exclusively on ROVs that are custom made for scientific research, which require smoother control and considerably more electronics and instrumentation than the heavier-duty vehicles used by the oil and gas industry. “The science ones are all hand-built, they’re all quirky,” he says.Unema’s role spans the full life cycle of an ROV’s design, construction, and operation. He primarily spends winters upgrading and maintaining vehicles and summers piloting them on expeditions. At GFOE, he mainly worked on two ROVs for NOAA called Deep Discoverer and Seirios, which operate from the ship Okeanos Explorer. But he has also piloted ROVs for other organizations over the years, including the Schmidt Ocean Institute and the Ocean Exploration Trust.Unema’s new consultancy, Deep Exploration Solutions, has been given a contract to do the winter maintenance on the NOAA ROVs, and the firm is now on the lookout for more ROV design and upgrade work, as well as piloting jobs.An Engineer’s Life at SeaOn expeditions, Unema is responsible for driving the robot. He follows instructions from a science team that watches the ROV’s video feed to identify things like corals, sponges, or deepwater creatures that they’d like to investigate in more detail. Sometimes he will also operate hydraulic arms to sample particularly interesting finds.In general, the missions are aimed at discovering new species and mapping the range of known ones, says Unema. “There’s a lot of the bottom of the ocean where we don’t know anything about it,” he says. “Basically every expedition there’s some new species.”This involves being at sea for weeks at a time. Unema says that life aboard ships can be challenging—many new crew members get seasick, and you spend almost a month living in close quarters with people you’ve often never met before. But he enjoys the opportunity to meet colleagues from a wide variety of backgrounds who are all deeply enthusiastic about the mission.“It’s like when you go to scout camp or summer camp,” he says. “You’re all meeting new people. Everyone’s really excited to be there. We don’t know what we’re going to find.”Unema also relishes the challenge of solving engineering problems with the limited resources available on the ship. “We’re going out to the middle of the Pacific,” he says. “Things break, and you’ve got to fix them with what you have out there.”If that sounds more exciting than daunting, and you’re interested in working with ROVs, Unema’s main advice is to talk to engineers in the field. It’s a small but friendly community, he says, so just do your research to see what opportunities are available. Some groups, such as the Ocean Exploration Trust, also operate internships for college students to help them get experience in the field.And Unema says there are very few careers quite like it. “I love it because I get to do all aspects of engineering—from idea to operations,” he says. “To be able to take something I worked on and use it in the field is really rewarding.”This article appears in the December 2025 print issue as “Levi Unema.”

27.11.2025 13:00:02

Zahrádkaření

Zprava i zleva

Zprava i zleva
1 den

President Donald Trump says he will be personally involved in the potential sale of Warner Bros. Discovery, with two enormous buyout offers on the table that risk further exacerbating U.S. media concentration. Netflix announced an $83 billion deal last week to buy Warner Bros. Discovery, which would give the tech giant control of the Warner Bros. movie studio and rival streaming service HBO Max. Paramount Skydance then launched a hostile takeover bid worth $108 billion that would create a Hollywood behemoth and bring CBS News and CNN under the same roof, in addition to a host of other media properties. Paramount Skydance is controlled by the pro-Trump billionaires Larry Ellison and his son David; the takeover offer is also backed financially by Trump’s son-in-law Jared Kushner, as well as the sovereign wealth funds of Saudi Arabia, Abu Dhabi and Qatar. Media critics and anti-monopoly advocates have warned that both offers for Warner Bros. should be rejected by federal regulators, though the Trump administration has largely ended aggressive antitrust enforcement. “We have these giant companies trying to take control of even more of what we watch, see, hear and read every day,” says Craig Aaron, the co-CEO of Free Press and Free Press Action, two media reform organizations. He calls the media giants’ efforts to woo Trump “a Mafia-type situation” and warns that previous media mega-mergers have been “disastrous” for workers, consumers and the businesses themselves.

09.12.2025 08:16:09

Zprava i zleva
5 dní

“One of the Most Troubling Things I’ve Seen”: Lawmakers React to U.S. “Double-Tap” Boat Strike, Pentagon Watchdog Finds Hegseth’s Use of Signal App “Created a Risk to Operational Security”, CNN Finds Israel Killed Palestinian Aid Seekers and Bulldozed Bodies into Shallow, Unmarked Graves, Ireland, Slovenia, Spain and the Netherlands to Boycott Eurovision over Israel’s Participation, Protesters Picket New Jersey Warehouse, Seeking to Block Arms Shipments to Israel, Supreme Court Allows Texas to Use Racially Gerrymandered Congressional Map Favoring Republicans, FBI Arrests Suspect for Allegedly Planting Pipe Bombs on Capitol Hill Ahead of Jan. 6 Insurrection, DOJ Asks Judge to Rejail Jan. 6 Rioter Pardoned by Trump, After Threats to Rep. Jamie Raskin, Grand Jury Refuses to Reindict Letitia James After Judge Throws Out First Indictment, Protesters Ejected from New Orleans City Council Meeting After Demanding ”ICE-Free Zones”, Honduran Presidential Candidate Nasralla Blames Trump’s Interference as Opponent Takes Lead, Trump Hosts Leaders of DRC and Rwanda in D.C. as U.S. Signs Bilateral Deals on Minerals, Trump Struggles to Stay Awake in Another Public Event, Adding to Speculation over His Health, Netflix Announces $72 Billion Deal to Buy Warner Bros. Discovery, 12 Arrested as Striking Starbucks Workers Hold Sit-In Protest at Empire State Building, Democratic Socialists Win Two Jersey City Council Seats in Groundbreaking Victories, Judge Sentences California Animal Rights Activist to 90 Days in Jail for Freeing Abused Chickens, National Parks Service Prioritizes Free Entry on Trump’s Birthday Over Juneteenth and MLK Holidays

05.12.2025 08:00:00

Zábava