Is Tomorrow’s Embedded-Systems Programming Language Still C?

What is the best language in which to code your next project? If you are an embedded-system designer, that question has always been a bit silly. You will use C—or, if you are trying to impress management, C disguised as C++. Perhaps a few critical code fragments will be written in assembly language. But according to a recent study by the Barr Group, over 95 percent of embedded-system code today is written in C or C++.

And yet, the world is changing. New coders, new challenges, and new architectures are loosening C’s hold—some would say C’s cold, dead grip—on embedded software. According to one recent study, the fastest-growing language for embedded computing is Python, and there are many more candidates in the race as well. These languages still make up a tiny minority of code. But increasingly, the programmer who clings to C/C++ risks sounding like the assembly-code expert of 20 years ago: “My way generates faster, more compact, and more reliable code. So why change?”

What would drive a skilled programmer to change? What languages could credibly become important in embedded systems? And most important, what issues would a new, multilingual world create? Figure 1 suggests one. Let us explore further.

Figure 1. An influx of new languages into embedded computing could lead to greater productivity, or a tower of Babel.


A Wave of Immigration

One major driving force is the flow of programmers into the embedded world from other pursuits. The most obvious of these is the entry of recent graduates. Not long ago, a recent grad would have been introduced to programming in a C course, and would have done most of her projects in C or C++. Not anymore. “Now, the majority of computer science curricula use Python as their introductory language,” observes Intel software engineering manager David Stewart. It is possible to graduate in computer science with significant experience in Python, Ruby, and several scripting languages but without ever having used C in a serious way.

Other influences are growing as well. Use of Android as a platform for connected or user-friendly embedded designs opened the door to Android’s native language, Java. At the other extreme on the complexity scale, hobby developers migrating in through robotics, drones, or similar small projects often come from an Arduino or Raspberry Pi background. Their experience may be in highly compact, simple program-generator environments or small-footprint languages like B#.

The pervasiveness of talk about the Internet of Things (IoT) is also having an influence, bringing Web developers into the conversation. If the external interface of an embedded system is a RESTful Web presence, they ask, shouldn’t the programming language be JavaScript, or its server-side relative Node.js? Before snickering, C enthusiasts should observe that Node.js—a scalable platform heavily used by the likes of PayPal and Walmart in enterprise-scale development—has the fastest-growing ecosystem of any programming language, according to one module-tracking site.

The momentum for a choice like Node.js is partly cultural, but also architectural. IoT thinking distributes an embedded system’s tasks between the client side—attached to the real world, and often employing minimal hardware—and, across the Internet, the server side. It is natural for the client side to look like a Web app supported by a hardware-specific library, and for the server side to look like a server app. Thus to a Web programmer, an IoT system looks like an obvious use for JavaScript and Node.js.

The growing complexity of embedded algorithms is another force for change. As simple control loops give way to Kalman filters, neural networks, and model-based control, high-performance computing languages—here comes Python again, but also languages like Open Computing Language (OpenCL™)—and model-based environments like MATLAB are gaining secure footholds.
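To make the shift concrete, here is a minimal sketch of the kind of algorithm now displacing simple control loops: a one-dimensional Kalman filter in plain Python. The variances and the sensor readings are illustrative assumptions, not values from any real system.

```python
# Minimal 1-D Kalman filter: estimate a roughly constant value from noisy
# samples. q (process noise) and r (measurement noise) are illustrative.

def kalman_1d(measurements, q=1e-4, r=0.25, x0=0.0, p0=1.0):
    """Return the filtered estimates for a scalar constant-state model."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                # predict: state is constant, uncertainty grows
        k = p / (p + r)          # Kalman gain: trust in the new measurement
        x = x + k * (z - x)      # update the estimate with the innovation
        p = (1.0 - k) * p        # shrink the error covariance
        estimates.append(x)
    return estimates

readings = [1.2, 0.8, 1.1, 0.9, 1.05, 0.95]   # noisy samples around 1.0
print(round(kalman_1d(readings)[-1], 2))       # estimate settles near 1.0
```

Even this toy version shows why such code migrates toward higher-level languages: the mathematics reads almost as written in a textbook, with no manual memory management in sight.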

Strong Motivations

So why don’t these new folks come in, sit down, and learn C? “The real motivation is developer productivity,” Stewart says. Critics of C have long argued that the language is slow to write, error-prone, subject to unexpected hardware dependencies, and often indecipherable to anyone but the original coder. None of these attributes is a productivity feature, and all militate against the greatest source of productivity gains, design reuse.

In contrast, many more recent languages take steps to promote both rapid learning and effective code reuse. While nearly all languages today owe at least something to C’s highly compressed syntax, now the emphasis has swung back to readability rather than minimum character count. And in-line documentation—long seen by C programmers as an example of free speech in which the author is protected against self-incrimination—is in modern languages not only encouraged but often defined by structural conventions. This discipline allows, for example, utility programs that can generate a user’s manual from the structured comments in a Python module.
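A minimal Python sketch of that structural convention: the docstring below is real in-line documentation that tools such as pydoc or Sphinx can extract mechanically to build a manual. The `read_sensor` function and its signature are hypothetical, for illustration only.

```python
# Structured in-line documentation: the docstring is part of the function
# object itself, so documentation tools can retrieve it programmatically.

def read_sensor(channel):
    """Read one sample from an ADC channel.

    Args:
        channel: zero-based ADC channel number.

    Returns:
        The raw conversion result as an int.
    """
    return 0  # placeholder for the actual hardware access

# The same text a maintainer reads in the source is available at run time:
print(read_sensor.__doc__.splitlines()[0])   # first line of the docstring
```

Running `pydoc` on a module of such functions produces formatted reference pages with no extra effort from the author.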

The modern languages also incorporate higher-level data structures. While it is certainly possible to create any object you want in C++ and to reuse it—if you can remember the clever things you did with the pointers—Python, for example, provides native List and Dictionary data types. Other languages, such as Ruby, are fundamentally object-oriented, allowing structure and reuse to infuse themselves into programmers’ habits.
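For instance, a few lines of Python using those native types do work that would otherwise mean a hand-rolled linked list or hash table in C (a small illustrative sketch):

```python
# Python's built-in List and Dictionary types come with the operations a
# C programmer would otherwise implement by hand with pointers.

samples = [17, 3, 42, 3]
samples.sort()                         # in-place sort, no qsort() callbacks

counts = {}
for s in samples:
    counts[s] = counts.get(s, 0) + 1   # dictionary: a hash map with no setup

print(samples)       # -> [3, 3, 17, 42]
print(counts[3])     # -> 2
```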

Two other important attributes work for ease of reuse in modern languages. One, the more controversial, is dynamic typing. When you use a variable, the interpreter—virtually all these server-side languages are interpreted rather than compiled—determines the current data type of the value you pass to the expression. Then the interpreter selects the appropriate operation for evaluating the expression with that type of data. This relieves the programmer of worry over whether the function he wants to call expects integer or real arguments. But embedded programmers and code reliability experts are quick to point out that dynamic typing is inherently inefficient at run-time and can lead to genuinely weird consequences—intentional or otherwise.
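A short Python sketch can show both sides of that argument: the same function works unchanged across types, and the same convenience produces results that can surprise:

```python
# Dynamic typing in action: the interpreter selects the operation from the
# run-time type of the operands. The last two lines show the kind of
# "genuinely weird" consequence the text warns about.

def double(x):
    return x * 2          # no declared type: works for anything defining *

print(double(21))         # integers: arithmetic -> 42
print(double(1.5))        # floats: arithmetic -> 3.0
print(double("ab"))       # strings: repetition -> "abab"
print(double([0, 1]))     # lists: repetition -> [0, 1, 0, 1]
```

Each call dispatches on the value's type at run time, which is exactly the overhead and the flexibility the paragraph above describes.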

The other attribute is a prejudice toward modularity. It is sometimes said that Python programming is actually not programming at all, but scripting: stringing together calls to functions written in C by someone else.

These attributes—readability, in-line documentation, dynamic typing, and heavy reuse of functions—have catalyzed an explosion of ecosystems in the open-source world. Programmers instinctively look in giant open-source libraries such as npm (for Node.js), PyPI (for Python), or RubyGems (for Ruby) to find functions they can use. If they have to modify a module or write a new one, they put their work back into the library. As a result, the libraries thrive: npm currently holds about a quarter-million modules. These massive ecosystems, in turn, appear to profoundly increase the productivity of programmers.

The Downside

With so many benefits, there have to be issues. And the new languages contending for space in embedded computing offer many. There are lots of good reasons why change hasn’t swept the industry yet.

The most obvious problem with most of these languages is that they are interpreted, not compiled. That means a substantial run-time package, including the interpreter itself, its working storage, overhead for dynamic typing, run-time libraries, and so on, has to fit in the embedded system. In principle all this can be quite compact: some Java virtual machines fit into tens of kilobytes. But Node.js, Python, and similar languages from the server side need their space. A Python virtual machine not stripped down below the level of real compatibility is likely to consume several megabytes, before you add your code.

Then there is the matter of performance. Interpreters read each line of code—either the source or pre-digested intermediate-level code—parse it, do run-time checks, and call routines that execute the required operations. This can lead to a lot of activity for a line of code that in C might compile into a couple of machine-language instructions. There will be costs in execution time and energy consumption.
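CPython's standard `dis` module makes that workload visible: disassembling a function shows the several interpreter operations behind a one-line addition that C might compile to a single machine instruction. (Exact opcode names vary between CPython versions, so the sketch only counts and lists them.)

```python
# Disassemble a single C-like statement into CPython byte code. Each of
# these operations is dispatched, type-checked, and executed by the
# interpreter loop at run time.

import dis

def step(a, b):
    c = a + b
    return c

ops = [ins.opname for ins in dis.get_instructions(step)]
print(len(ops))   # several byte-code operations for one line of arithmetic
print(ops)        # loads, the add itself, a store, a return...
```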

Run-time efficiency is not an impossible obstacle, though. One way to improve it is to use a just-in-time (JiT) compiler. As the name implies, a JiT compiler works in parallel with the interpreter, generating compiled machine instructions for code inside loops, so subsequent traversals will execute faster. “JiT techniques are very interesting,” Stewart says. “The PyPy JiT compiler seems to speed up Python execution by a factor of about two.”

In addition, many functions called by the programs were originally written in C, and are called through a foreign function interface. Heavily-used functions may run at compiled-C speed for the simple reason that they are compiled C code.
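Python's standard `ctypes` module illustrates such a foreign function interface: the call below hands the actual work to `sqrt` in the platform's compiled-C math library. The library-lookup step assumes a Unix-like system with a separate libm; the details differ on other platforms.

```python
# Foreign function interface via ctypes: Python calls straight into a
# compiled-C routine, so the hot work runs at compiled-C speed.
# Assumes a Unix-like system where find_library("m") locates libm.

import ctypes
import ctypes.util

libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.sqrt.restype = ctypes.c_double         # declare the C signature so
libm.sqrt.argtypes = [ctypes.c_double]      # ctypes converts arguments

print(libm.sqrt(9.0))   # computed by the C library, not the interpreter
```

Heavily used ecosystem modules do the same thing behind the scenes, which is why "interpreted" programs often spend most of their cycles in compiled code.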

There are other ideas being explored as well. For example, if functions are non-blocking or use a signaling mechanism, a program rich in function calls can also be rich in threads, even before applying techniques like loop-unrolling to create more threads. Thus there is the opportunity to apply many multithread cores to a single module—a direction already well-explored in high-performance computing. Going a step further, Ruby permits multithreading within the language itself, so it can produce threaded code even if the underlying operating system doesn’t support threads. And some teams are looking at implementing libraries or modules in hardware accelerators, such as graphic processing units (GPUs), the Xeon Phi, or FPGAs. In fact, the interpreter itself may have tasks suitable for acceleration.
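As a small illustration of that thread-rich style in Python, a pool can map many blocking "reads" onto worker threads; the `sample` function and its sleep are hypothetical stand-ins for real sensor or network I/O. (CPython's global interpreter lock limits CPU-bound threading, but the pattern still overlaps I/O waits.)

```python
# Thread-rich structure: each call becomes a task, and a pool of worker
# threads overlaps the waits that would serialize a naive loop.

import concurrent.futures
import time

def sample(channel):
    time.sleep(0.01)      # stand-in for a blocking sensor or network read
    return channel * channel

with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(sample, range(4)))   # four overlapping "reads"

print(results)
```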

Another difficulty with languages from the server side is the absence of constructs for dealing with the real world. There is no provision for real-time deadlines, or for I/O beyond network and storage in server environments. This shortcoming gets addressed in several ways.

Most obviously, the Android environment encapsulates Java code in an almost hardware-independent abstraction: a virtual machine with graphics, touch screens, audio and video, multiple networks, and physical sensors. For a lighter-weight platform with more emphasis on physical I/O and with the ability to run on even microcontrollers, there is Embedded Java.

Languages like Python require a different approach. Since the CPython interpreter runs on Linux, it can in principle run on any embedded Linux system with sufficient speed and physical memory. There have been efforts to adapt CPython further by reducing load-time overhead, providing functions for physical-I/O access and for using hardware accelerators, and adapting the run-time system to real-time constraints. One example of the latter is the bare-metal MicroPython environment for STM32 microcontrollers. Implausible as it might seem, similar efforts are underway with the JavaScript engine underneath Node.js.

Security presents further problems. Many safety and reliability standards discourage or ban use of open-source code that has not been formally proven or exhaustively tested. Such restrictions can make module reuse impossible, or so complex as to be only marginally productive. The same level of scrutiny would extend to open-source environments such as the virtual machines. An open-source platform like CPython would at the very least raise red flags all over the reliability-and-safety community.

Finally, given the multitude of driving forces bringing new languages into the embedded world, it is not hard to envision multilingual systems with modules from a number of source languages, each chosen for its access to key libraries or its convenience to a particular developer. Of course you could host several virtual machines on different CPU cores, or unified under a hypervisor on one core, and then you could provide conventions for inter-task messaging through function calls. But the resulting system might be huge.

Another possibility might be a set of language-specific interpreters producing common intermediate code for a JiT compiler (Figure 2). Certainly there would still be issues to sort out—such as different inter-task communications models, memory models, and debug environments—but the overhead should be manageable.

Figure 2. A single run-time environment could handle a variety of interpreted languages in an embedded system.


If these things are coming, what is a skilled embedded programmer to do? You could explore languages from the Web-programming, server, or even hobbyist worlds. You could try developing a module in both C++ and an interpreted language on your next project. There would be learning-curve time, but the extra effort might count as independent parallel development—a best practice for reliability.

Or you could observe that today the vast majority of embedded code is still written in C. And you are already a skilled C programmer. And C produces better code. All these statements are true. But substitute “assembly language” for “C”, and you have exactly what a previous, now extinct, generation of programmers said about 20 years ago.

Still, history wouldn’t repeat itself. Would it?



CATEGORIES: Embedded system, IoT / AUTHOR: Ron Wilson

35 comments to “Is Tomorrow’s Embedded-Systems Programming Language Still C?”

  1. Hands down…C will dominate the embedded-systems world for an indefinitely long time. There is not a single other programming language comparable to C for embedded systems in the areas of developer base, available expertise, available code base, performance, suitability for IoT, succinct programming, etc. Not a single reason is shown here that could jeopardize the position of the C language in the embedded world.

  2. The way I see it, the C language and, more recently, C++ are only growing deeper roots in the embedded-systems world. Big manufacturers invest engineering time in the GCC compiler to support their devices in the best way possible and to keep up with the competition. Today is a world of rapid development cycles, and only a few can afford to tie up potential developers in expensive toolchains, IDEs supported only on Windows, support available only under an NDA, or evaluation boards costing a pretty penny.

    As for the developers only familiar with Python, they are at the disadvantage of not being able to compete in this market. It is more a case of “old” C programmers needing to learn new tricks than anything else. When we take into account the reliability and safety requirements of the industry, it is obvious that people developing in Python on, say, a Raspberry Pi cannot compete as embedded-system developers. It seems that management is looking to source a cheaper workforce (imagine paying the salary of a web developer to an engineer for doing embedded-systems development, even if they are going to bodge an RPi in there somewhere), but such people have always been present, and it is certain that they don’t survive for long (i.e., the management and the company).
    My suggestion for the “old” guild is to refresh Math, Statistics, Probability, Physics, Algorithms, the OOP paradigm… and, on the other side of the spectrum, Psychology, Nutrition, and Health. Then, when compilers go through an evolution, we should be good 🙂

  3. One note regarding out-of-the-box, reusable data-structure implementations: this is not a problem for C++ (compared to C). Given the mention of Python providing List and Dictionary data types, the C++ equivalents (called “containers”) are std::list, std::unordered_map, and std::map. Any hardware environment that allows the use of Python would certainly allow the use of C++. At the same time, you can customize C++ containers so that they do not require run-time heap allocation (a feat not possible in Python, at least AFAIK) and operate within a constrained stack region whose bound is known at compile time. This is of significant importance in memory-constrained environments, like microcontrollers with kilobytes of RAM, and it can play a role in determining the suitability of a given programming language for these tasks.

  4. Seriously?
    Not much has changed from the late ’70s. The only place C was used at my university was in a grad course that needed access to the PDP-10 and, later, the VAX.
    Otherwise we had our choice, depending on the course, of ALGOL-W, ALGOL-68, FORTRAN, or COBOL. Additionally, the computing science department had some specialized courses that required assignments to be written in LISP, APL, SNOBOL, UBC-PASCAL, and of course IBM-370 Assembler.
    Since PCs didn’t exist yet, the university still targeted its courses to the job environment.
    Before making sweeping statements about Python, I believe it’s important to look at sales of microcontrollers. At one point the largest-selling controller was the venerable MC68HC05, and the code was assembler. Now it’s either one of the PIC families or possibly one of the smaller ATMELs. In both cases, a code space of 16K to 64K and 4K of RAM doesn’t come close to what is needed. And since the processor is but a tiny part of any embedded project, I think I can safely predict Python will never surpass C as the embedded-system language of choice.

  5. Regarding parallel/multicore execution: clock speeds have plateaued, so the only way to get faster performance on a multicore processor is through parallel execution, usually via multithreading. But the standard Python and Ruby implementations contain a “global interpreter lock” that prevents more than one thread from running at once, even on a multicore processor. Because of this, a multithreaded program in these languages can actually run slower than its single-threaded equivalent because of the added overhead of launching and context-switching between the threads. There are implementations of Python and Ruby that do not have this limitation (e.g., those built on the Java virtual machine), but you have to go out of your way to use them. C and C++ do not have this problem — they allow you to write code whose performance scales with the number of cores in your processor. Also, C++11 now has threads built into the language standard, supported by gcc/g++.

  6. In 50 years of scientific and industrial programming, C is the worst, most buggy, and time-consuming language I’ve encountered. Any new languages that speed development are most welcome!

  7. I work in a biology lab that builds a lot of our own equipment for automating data collection, which means writing a lot of C code for microcontrollers (often in the Arduino IDE), but whenever we can squeeze an operating system into the embedded hardware, we only write a thin C-wrapper library that exposes the hardware to a higher level programming language (typically Java and Python) and then use that higher level language to implement the control logic. When performance becomes critical, we just add an FPGA (programmed in Verilog) instead of trying to optimize our C code. Thus in my lab, embedded C is under attack from both higher-level and lower-level directions.

    Most natural science labs do not presently build much of their own hardware, but big-data science is increasingly driving people with no electronics or computer science background into embedded systems and automation. These people need to move terabytes of data from home-made sensors to large cloud databases. They are the embedded programmers of tomorrow, and if nothing replaces C, then this influx of people will change embedded C conventions to look more like R or Python or JavaScript.

  8. Assembly needed to be replaced…C++ not so much. I have no doubt that the embedded language world will become more diverse, but until something else can answer the fundamental size and speed issues while offering greater utility, C/C++ will continue to outperform in a smaller space. It could be argued that increasing memory size and processor performance will render the size and space issue moot, and it may in time, but for the foreseeable future I think C/C++ remains.

    And I’m not convinced that OO-centric languages with garbage collection and magic containers are making better developers. In the same way that desktop languages like VB raised bad code to an art form, my experience with newly graduated programmers who learned “productive” languages is that they are sloppy and lack attention to detail. When applied to cell phones or web pages, such bad practices are a mere annoyance, but when applied to critical infrastructure, such practices can kill thousands.

    Enough time wasted. Back to figuring out how to cram 150K worth of code into the 43K space available in the on chip RAM tied to my NIOS2 coprocessor. I’m thinking C/C++ may be the best choice…

  9. Well, I started programming at university at the end of the seventies with Fortran and Pascal. When I moved to Pascal I was so glad that the indentation rubbish had stopped. For years now I have been programming in C/C++ and HDL languages. It will take a lot to convince me to move.

    I don’t think I have ever sat in a Python tutorial or demo where the speaker doesn’t eventually start scratching his head because his program doesn’t work, then come to the conclusion, “Oh, I think I have used tabs instead of spaces somewhere” (or is it the other way around). It’s really a joke. I just don’t understand what these guys have against brackets or begin/end. Are they afraid of filling up the disk or something?

    Python might have an interesting class/object model but I don’t see C++, Java incrTcl or object-Perl as being inferior, just different.

    The criticism of C/C++ readability is of course true. But I guess there are two types of programmers: the ones who want to show how great they are and in what cryptic constructs they are proficient, and the ones who can code in a reusable and maintainable way.

  10. Application of higher-level languages such as Python is probably driven by hardware developments. When in an embedded system we can afford hundreds of megabytes of memory and a sufficiently fast processor, I would like to use Python—especially when it comes to setting up the internal web site and REST services that handle external communication. On the other hand, when we are power- and memory-starved, implementing your stuff in C makes much more sense.

    In this light it is interesting to see that mobile apps for a large part run on compiled code (such as Java in the case of Android), but often large parts are also written in HTML/JavaScript and subsequently rendered in interpreters. So it seems that kind of hardware and power budget is just on the edge. BTW: not very much C in the mobile arena.

  11. Assembly came under pressure not only because it’s uncomfortably near the metal to work with, but because it’s inherently nonportable between vendors’ CPU products.

    Because C is hugely portable and just far enough away from the metal, it’s compact and fire-and-forget compared to JIT and other runtime burdens, it’s going to be the language of choice for decades to come…

  12. Developers need to learn C; if you want real-time behavior or low overhead, another language won’t cut it. You can generate C code that is almost 1:1 with assembly, and you might be able to do this with FORTRAN. If you want optimization, you need to be able to do that by hand. Sure, the upper-level functions could be handled by an interpretative language. But plenty of problems in the industry have been created by lazy developers who only know interpretative languages.

  13. Great discussion, I think Andy is spot on. The assembly guys went extinct (used to do a lot of it myself) because assembly code is not portable, each new machine required a huge investment to learn the machine and its assembler syntax, and even then productivity was poor. Anyone remember x86 vs 68K assembly?

    At university, EEs had to take a programming class; most of us chose FORTRAN but never used it much (maybe modeling a BJT or something). Then along came C, and the world changed. Stroustrup fixed most of the shortcomings with C++. Since those days of yore, I’ve taken courses to learn Perl, Java, C#, Scheme/Racket, Python, Haskell, and now R. Great languages all; each serves a good, useful purpose. But for my money, in the embedded space, by far the easiest way to get ’er done quickly in a tight space with minimal headache is with a decent C/C++ compiler. Until micros ship with OSs, it’s unlikely this will change for at least another decade or so.

  14. I’ll be more interested when a heterogenous model of computation develops in which a mix of compiled, interpreted, and functional routines can be seamlessly integrated into an efficient execution environment. C certainly has major strengths of interest to embedded systems design — tight fast code, efficient memory usage, and decades of experienced programmers, primarily. Interpreted and semi-interpreted languages like Python, Java, and C# have their advantages for cleaner, more bug-free management of memory, but often using heap-based programming on SoC systems with limited memory can be problematic. Functional and abstract logic environments like Lisp, Mathematica, and Prolog have great strengths for solving the kinds of problems I deal with — which tend to be combinatoric in nature or require major amounts of parallel computation.

    Rather than incremental changes from one language to another, I’ll be more interested when these disparate environments can cooperate on a task, rather than requiring complete reimplementation and porting if you want to change from one environment to another. WHY NOT insert a bit of Prolog code into a fast-executing C program when it defines a logical system you need to solve for? That’s what I want to use, for big systems, small systems, and FPGA development.

  15. Interesting “pot stirrer” here. The goal seems to be to press for a common language and environment for software/firmware development. There’s nothing really new here. The results should be predictable, but apparently are not because the idea surfaces again and again in software engineering.

    Each product brings with it its own set of development constraints. The constraints include time-to-market, security, reliability, cost of development, impact on available system resources, and management’s ego, among others. Every project will have these constraints and others. How important each constraint is varies from project to project.

    For example, a web-based application may place high emphasis on short time-to-market, low cost of development, and good marks from management’s ego while being less concerned about security, reliability, and impact on available system resources. On the other hand, an aerospace application may place high emphasis on reliability, impact on available (possibly very small) system resources, and security while being less concerned about time-to-market, cost of development, and management’s ego. Depending on the nature of any particular application and its environment, your mileage may vary…

    In days of yore, the essential motivation for the COBOL programming language was the ability to program in a common language, English. One of the main justifications was that non-engineers could read and understand it. The ability to write COBOL would require only a minimal amount of training and to begin, practitioners need only be fluent in English. This promised to provide a much larger supply of coders, which would result in much lower labor costs.

    Of course, the argument is fallacious because English, like every other natural language, is loaded with ambiguity while a programming language needs to be very precise. This is why “English-like” programming languages have ultimately failed.

    Another noteworthy attempt at a standard language was Ada. With the US Department of Defense pushing it and the best software engineering talent at the time designing it, one would think it had the best chance to become THE standard. But where is it now? The six month to one year learning curve, including the programmers pay, was a strong disincentive to use Ada. Also, Ada tended to be too divorced from the underlying hardware to easily exploit any hardware strengths. For some projects, that’s a good thing. In others, it masks inadequacies of the target hardware until very late in the project when they are a lot more expen$ive to correct.

    In short, while there are good reasons to desire a standard development language, the essential complexity of the real world has to date overwhelmed all attempts to establish such a standard. Proposed standards (e.g., Ada) that might have had the flexibility to deal with reality have failed because they themselves became too complex for many projects. At this point in history, there is no substitute for an engineer who can recognize the constraints imposed on a project for what they are and has a sufficiently broad knowledge of the available development systems to weigh the trade-offs and make an appropriate choice.

  16. Is Tomorrow’s Embedded-Systems Programming Language Still C?

    The answer to this is more complex than it seems. Almost two decades ago there was a recognition that C was suffering from language bloat, and there was a push by some members of WG-14, the body charged with the responsibility of officially defining the C language, to define a minimum language set. It was an effort that failed.

    The next embedded systems language is not going to be the many interpreted languages that were mentioned but a new language that will reflect changes in computing as we slowly move from the serial nature of applications to massively parallel systems doing relatively ordinary things. A new language that is designed to be both flexible and clear for programmers to communicate application requirements to code generation tools.

    In the end, C’s downfall will probably be not the language but the tools that translate it, most of which are based on very old compiler technology, some of it reaching the better part of half a century in age.


  17. Choices for embedded development here remain:
    1) C++, if it can meet system and application requirements
    2) C, if C++ does not meet those requirements
    3) No automatic garbage collection
    4) Assembly only if needed for execution speed; if so, first consider faster or different microcontrollers, or a switch to Verilog in an FPGA
    5) Avoid languages like Java and Python

    For Windows programming, C# is the choice if it can meet application requirements. I agree with previous comments: programmers can become sloppy with these programmer-friendly languages. In addition, it seems many programmers out there are uneducated in computer science, which leads to more problems; care must still be taken to program correctly.

    The SDLC is worth following, gathering requirements, creating designs, coding, testing, etc. Code must be written well, properly commented, be readable, properly indented, follow structured programming techniques, OO, etc., etc.

  18. The modern versions of Fortran may be a very nice alternative to writing the code entirely in C. The new syntax looks a lot like (and is sometimes identical to) Matlab, which makes the code much easier to write and read compared to the same code in C. Modern Fortran is not appropriate for most tasks, but it could be very useful for more computationally heavy tasks such as signal processing or neural networks. Integrating C and Fortran might be a great way to go for some applications.

  19. Having started with FORTRAN, Algol, COBOL, and Plan/GIN (ICL 1900 assemblers) in the ’60s, my eventual shift (via other flavour-of-the-month languages) to Pascal on PCs speeded development – and I still use Delphi if I need to write a PC program. When others were shifting to C, I was using Z8, 6502, Z80, H8 and AVR assembly. The certifying people took my assembly listings and confirmed that each location in the OTP ROM matched, as they would not trust the assembler program. There was no hope of getting a compiler trusted if you were sufficiently paranoid (about security or getting sued). I would hope that medical devices (eg pacemakers) go through a similar process.

  20. Yep, that’s just what we need: a bunch of “web developers” getting into the embedded world. The interwebs is a cesspool of hodgepodge hacks, in large part due to the immature “coding” base that takes rotten design from the W3C and keeps propagating polished turds. Nobody can say with a straight face that the JavaScript/XML/CSS “stack” isn’t a disaster – X11 over dial-up is more secure and efficient, which is sad.

    Occam’s razor should be engineering rule #1: simplest is best. A subtle call for more language diversity to program hardware is really a desperate ploy to increase sales, guided by poor judgement and a huge misconception about EE and embedded-systems development. If hardware vendors wanted to sell more silicon, they should stop making proprietary bitstream formats and tools the only way to get at their devices; then everyone could come to the party – let the best paradigm win. BE HARDWARE VENDORS – DEVELOP THE BEST HARDWARE. Instead most vendors choose to offer mediocrity by being pimps of IP and badly written software (the development environment for programmable logic is horrific).

  21. Under the hood of OOP, JIT, Python, and Java there is an intermediate language in the form of byte code, and a stack machine emulated by a RISC processor.

    OOP encapsulates data, so local variables and parameters are pushed onto the stack at runtime. It seems that streaming data onto the stack in a managed way, rather than passing it through the cache a line at a time, would make sense.

    Most byte codes are simple and can be implemented in hardware. Some are complex and are better emulated in SW. Sun has a PDF describing a hardware implementation to use in place of interpretation. That seems worth investigating.

    There is also CIL, which is a standardized byte-code adaptation of MSIL. In short, a stack-based hardware design could solve a lot of the problem.
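    The stack-machine model this comment describes is small enough to sketch. Below is a minimal, illustrative byte-code interpreter in C; the opcode set and names are invented for illustration and do not match any real JVM or CIL encoding:

```c
#include <stddef.h>
#include <stdint.h>

/* Invented opcodes for a toy stack machine (not JVM/CIL byte code). */
enum { OP_PUSH, OP_ADD, OP_MUL, OP_HALT };

/* Execute a byte-code program; returns the value on top of the stack at HALT. */
int vm_run(const uint8_t *code, size_t len) {
    int stack[32];
    int sp = 0;                              /* index of next free slot */
    for (size_t pc = 0; pc < len; ) {
        switch (code[pc++]) {
        case OP_PUSH: stack[sp++] = (int8_t)code[pc++]; break;
        case OP_ADD:  sp--; stack[sp - 1] += stack[sp]; break;
        case OP_MUL:  sp--; stack[sp - 1] *= stack[sp]; break;
        case OP_HALT: return stack[sp - 1];
        }
    }
    return 0;
}

/* Computes (2 + 3) * 4 on the toy machine. */
int vm_demo(void) {
    const uint8_t prog[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD,
                             OP_PUSH, 4, OP_MUL, OP_HALT };
    return vm_run(prog, sizeof prog);
}
```

    Each opcode body is a handful of loads, stores, and pointer bumps, which is why such cores are candidates for direct hardware implementation, as the comment suggests.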

  22. I quit an embedded IoT job last month because management had dictated using node.js.

    I found it to be a disgustingly horrible language to program in, one that actually discourages inline documentation, readability, and good design.

    In my experience, well-written C is both “readable” and “maintainable” … but it is definitely getting to the point where something with a few more modern language features would be welcome, to keep productivity up with the complexity of modern systems.

    C++ has grown into an ugly mess (IMHO), with so many obscure edge cases that I find it difficult to recommend to anyone anymore.

    I’m currently looking at Google’s Go language as the closest thing I’ve seen in ages to an “improved” replacement for C that still offers the readability and maintainability I desire.

    Python … nope.

  23. Good. A RESTful server in embedded systems is a better way. I sometimes use Python on ARM processors, or Golang.

  24. Python, a high-performance language? LOL. To me it seems that the author doesn’t know jack s–t about what he is talking about. In what serious commercial embedded project is Python used as the embedded dev language? Python is popular and easy to use, and it is a very small part of complex embedded build chains that include many other languages. But Python and performance don’t belong in the same sentence. While it is easy to use and one can quickly build impressive applications, it is not performant in any way: inefficient, buggy, full of idiotic language constructs, wasting just about every resource. In embedded systems, where resources are limited and system stability is of the utmost importance, Python is a no-go.
    Anyway, it is not as if one has to choose between C/C++ and Python; normally many languages and scripts are part of a build chain. Some of the C/C++ code is hand written, other code is generated by higher-level languages, but in the end the code that runs on embedded systems is compiled C/C++ and hand-written assembly.
    JIT may seem like a super-exciting feature to the layman, but it is not a new technology and it is certainly not exclusive to Python; JIT compilation is also used with C/C++ in some cases where performance is critical, e.g. video codecs.
    To see how performant Python is, try to implement an audio or, even better, a video codec, or anything of that sort, when you don’t have prebuilt optimized libraries and have to do everything from scratch (i.e., you invent a new codec).
    In conclusion, C/C++ is the language of the embedded world; assembly is still very relevant for startup code and performance-critical paths; Python…. Python is a joke.

  25. B# seems to still be a proprietary language, and it has gained little traction since 2006. Programmers, and especially embedded programmers, are wary of adopting proprietary languages.

    It would be nice if the embedded community had more choices, like D or a new language (that compiles into C code).

    Python is nice but very slow and resource intensive.

  26. Back in the day the word was that C was the only language assembly programmers would agree to switch to. 🙂

    Python & kith are bad partly because of speed, but primarily because of poor compile-time type checking: in embedded we need all the compiler-based bug detection help we can get. We need to be moving toward more use of automation to kill bugs, not less. Let me put in a plug at this point for the CompCert C compiler by the redoubtable Xavier Leroy, whose correctness is established by mechanically checked proofs (and verified by testing as well), putting it at least a decade ahead of its time.
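    One concrete form of compiler-based bug detection already available in plain C11 is `_Static_assert`, which turns a stale assumption into a build failure instead of a field failure. A minimal sketch; the register layout below is invented for illustration:

```c
#include <stdint.h>

/* Hypothetical memory-mapped timer block; the layout is invented here
 * purely to illustrate the technique. */
typedef struct {
    uint32_t ctrl;
    uint32_t status;
    uint32_t data;
} timer_regs_t;

/* If someone later adds a field, or the compiler inserts padding, the
 * build fails immediately with this message instead of the device
 * misbehaving in the field. */
_Static_assert(sizeof(timer_regs_t) == 12,
               "timer_regs_t must match the 12-byte hardware register map");
```

    The same idiom guards buffer sizes, enum ranges, and struct/protocol agreement, all checked before a single byte is flashed.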

    Large embedded systems that need improved reliability and productivity should look toward modern compiled languages with extensive type checking, garbage collection, Hindley-Milner-Damas type inference, etc., not toy languages like Python. For example, Mythryl (which I happen to have developed :-)) offers much of the convenience of scripting languages with all the speed and type safety of a modern compiled language, plus the productivity of modern higher-order functional programming support.

    If you want to get a glimpse of where we’re headed, I recommend Adam Chlipala’s “Certified Programming with Dependent Types”. Safety-oriented markets like automotive are not going to be satisfied much longer with “well, we ran a few tests and nothing crashed”: sooner rather than later they will demand mechanically verified certifications of various safety properties, and Chlipala’s work demonstrates the sorts of tools and techniques we will have to master to meet these upcoming market demands.

    Chlipala’s work grows out of Inria (think “French National Science Foundation”) work funded, inspired and used by European aerospace. This stuff is no longer pie in the sky: It is happening, the train is leaving the station and we need to be on board.

  27. An app guy above wrote: “BTW: not very much C in the mobile arena.”
    It appears that way to app/script guys, but C-based OSes like Linux and C-based RTOSes, plus the silicon, are what still make modern computers as powerful as they are today. Mobile apps are mainly Java with its JIT compiler on Android’s Linux, JavaScript, or Objective-C on iOS, but almost all modern touchscreen mobile devices are built on operating systems written in C for performance, code size, device-driver I/O, and available middleware platforms.

    Notice that this most successful mobile-computing paradigm is efficient operating-system code in C near the hardware, upon which are built hardware abstraction layers (HALs) and middleware platforms like Android, file systems, and other system APIs for enhanced portability, ease of use, and app-development speed. First think of a processor class, available memory, and peak performance; then choose an appropriate programming model (bare-metal API, RTOS, or full OS like Linux on an apps processor) and the associated API sets.

    Yes, the lowest-cost and most numerous processors (8-bit MCU systems) will continue to be memory constrained and must use C (or asm), but as the cost of silicon drops further they all become 32-bit or higher systems (see the cheap, fast, efficient ARM Cortex-M MCUs everywhere). In the near future, to boost both code portability and performance, there will likely be popular hardware-accelerated implementations of JavaScript and Python interpreters to open up the IoT to many more web and app developers, as the Altera system-architect author states. For an example, see the Python-to-HDL tool MyHDL. ARM’s mbed platform is also powerful, as it abstracts a more standard OS-like environment into a full set of C++ APIs for Cortex-M 32-bit MCUs. As memory on these chips grows, the soft hooks and HW-accelerated interpreters and JIT compilers can’t be far behind.


  28. Model-based design in MATLAB/Simulink gets around some of the issues mentioned in the article. It generates quite efficient C code that in my experience rarely, if ever, needs hand optimization. While the generated C code is not very readable, it traces back to the model on the PC host. The model, be it the Simulink model or Embedded MATLAB code (or both), allows the developer to work at the proper level of abstraction and verify the logic on the PC host through simulation. Generated code is guaranteed to be bit-accurate to the simulation. We have been using this approach at my company very successfully for over 8 years to implement what would now be considered IoT algorithms or “edge analytics”. Of course, we still need handwritten C for more traditional processing and I/O operations, but this approach allowed us to avoid the use of any interpreters in our real-time environment in order to implement the “IoT functionality”.

  29. Interesting article; my answer would be yes. Definitely.

    From what I see, the new generation of programmers, the ones that typically use languages like Python, don’t have fundamental knowledge. They can do ‘quick and dirty’ things, like patching together some libraries found on the net to build an application that kind of works. But when it comes to embedded, they don’t have the slightest idea how the HW works, how the compiler works, how to design big applications (hundreds of megabytes of source code), how to debug errors; nothing. It is a completely new world for them.

    I also see that the general feeling among the new generation of programmers is that C is obsolete, outdated, dead; why would anyone want to learn C, not to mention C++? I think this is because most programmers work on application SW (web servers, databases/financial applications, and other big GUI-framework applications). In embedded, C/C++ is the language of choice (with occasional assembly); there is no better candidate. One has to deal with a lot of low-level stuff, like interrupts, DMAs, RTOSes, caches, stacks, heaps, performance issues, latencies, boot time, security, safety, HW issues (signal integrity) … How can one deal with all that from Python?

    Are the assembly-code experts from 20 years ago now extinct? (At the same time I wonder: where will the Python code experts be 20 years from now?) I say assembly is not dead; there are still lots of fields where assembly is used. I still have contact with assembly on an almost daily basis. Here are a few:
    – OS/RTOS: could you imagine an RTOS with no assembly?
    – Multimedia/graphics firmware: these contain a lot of assembly code
    – Safety-critical systems (severely life-threatening or fatal injury in case of malfunction): assembly only
    – Very small MCU systems, or ones that don’t have a C compiler
    – Your own custom FPGA processor
    – High performance
    – Debugging embedded systems: many times you have to do assembly-level debugging (instruction trace)
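    In the high-performance case above, assembly today usually appears as a few inline instructions inside C rather than whole modules. A minimal sketch using GCC extended asm, assuming an x86-64 GCC or Clang toolchain; the mnemonic and constraints are architecture-specific and would differ on ARM:

```c
#include <stdint.h>

/* A hedged illustration of GCC extended inline assembly (x86-64 assumed):
 * the "+r" constraint makes r both an input and an output register, and
 * "r" places b in a register of the compiler's choosing. The add itself
 * is emitted as one explicit instruction. */
static inline uint32_t asm_add(uint32_t a, uint32_t b) {
    uint32_t r = a;
    __asm__("addl %1, %0"   /* r += b */
            : "+r"(r)       /* in/out operand %0 */
            : "r"(b));      /* input operand %1  */
    return r;
}
```

    The same pattern carries the real-world uses: reading cycle counters, toggling interrupt-enable bits, or issuing instructions the compiler cannot be trusted to emit.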

    As the end user, would you prefer an airbag controller whose firmware is written in assembly, works the same way every time, and has a fixed, deterministic runtime and code-execution path, or one with Python firmware, which kinda works OK too?

  30. When I first saw Python, I immediately rejected it because its spacing is an implicit part of the program. That was a train wreck waiting to happen. So I rejected it and never looked at it again (and I still think I’m right, as one commenter makes this very point, with instructors getting caught in that trap as they teach). It can also lead to working programs and code snippets that quit working after being transferred in a text-oriented fashion, like in an email or text message where spacing may get changed.

    My programming experience is mainly with APL. It allowed software sculpture. You tracked the problem rather than trapping it. Later, as APL didn’t get adopted anywhere, I moved to C and then to C++. My problem with those languages is that you needed to totally master them before you could really use them. If you didn’t, you would spend endless hours tracing down bugs caused by misunderstanding (not actual logic errors).

    Finally, I wrote my own APL, which I called GLEE. I wrote it in C++. Initially I hated C++, but as I mastered it I came to really love it. My code contains almost no comments. If I can’t read the code and understand what it’s doing, I rewrite it until I can. I adopt naming and style that are self-documenting. Looking back at my now 15-year-old GLEE code, I can still easily read, understand, and change it. And the whole “interpreter” is less than 500K, which I used to think was enormous. The problem I had with self-documenting programs (where a utility strips out the comments into documentation) is that there is always a comment that doesn’t get updated with a programming change. And programming changes take on comment-change inertia.

    A language that looks very interesting to me (though eludes my mastery so far) is FORTH. With a tiny nucleus of common code, you can pull yourself up by your bootstraps on most controllers and computers. Once there, you can “poke” the device (interactively) to learn how it behaves. You can write very efficient code. And you have a built in debugging tool. FORTH is my current focus … and it’s not even mentioned.

    Regarding Java … that’s C++ lite with training wheels. Regarding JavaScript … that looks like a pretty kludgy scripting-language hack to me. I would discourage picking a language just because it’s easy to learn. You’re better off going to the trouble of mastering hard-to-learn (but elegant) languages if you’re a career coder. You’ll find it less frustrating in the end.

    Summarizing: whatever you use, master it before trying to apply it … or be ready and able to jump ship after your inevitable soiling of the ship you’re on.

  31. There are real constraints in the real world of mission-critical embedded systems:
    1. response time (latency). Miss the interrupt, and the plane crashes
    2. limited resources. 4k..8k of RAM. 1K..2K of EEPROM. 64K..256K (extravagant) of FLASH.
    3. You own *all* of memory, or you don’t – and if you don’t, the system will crash at the worst possible time, killing people. Such crashes are, by the way, impossible to replicate, and thus to debug
    4. power is limited in some systems
    5. determinism. A garbage collector kicks in at the wrong moment and the system fails, killing people
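    The determinism constraint above is why such systems replace malloc and garbage collection with memory the program owns up front. A minimal sketch of a fixed block pool, with illustrative names and sizes: all storage is static, and both allocation and free are O(1) with no collector to preempt anything.

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative sizes; a real system tunes these to its RAM budget. */
#define POOL_BLOCKS 8
#define BLOCK_BYTES 32

static uint8_t  pool_mem[POOL_BLOCKS][BLOCK_BYTES];  /* all memory, owned up front */
static uint8_t *free_list[POOL_BLOCKS];              /* LIFO stack of free blocks  */
static int      free_top = -1;                       /* -1 until pool_init() runs  */

void pool_init(void) {
    for (int i = 0; i < POOL_BLOCKS; i++)
        free_list[i] = pool_mem[i];
    free_top = POOL_BLOCKS - 1;
}

/* O(1), bounded time, never touches the heap; NULL when exhausted. */
void *pool_alloc(void) {
    return free_top >= 0 ? free_list[free_top--] : NULL;
}

/* O(1): push the block back on the free stack. */
void pool_free(void *p) {
    free_list[++free_top] = (uint8_t *)p;
}
```

    Exhaustion is a visible NULL at a known call site, not a collector pause or an out-of-memory kill at the worst possible time.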

    There are three possible outcomes when a mission-critical system fails:
    a) a lot of money is just *gone*, and more starts burning at a horrid rate. Think of the phone system, or eBay, or Amazon, or Google.
    b) people die
    c) a *lot* of people die

    If the phone system goes down, and you are having a heart attack, you cannot call 911 – you gonna die
    If the plane goes down, people gonna die
    If the nuclear reactor does something bad (and remember, Fukushima was only 5 years ago, or Chernobyl), *lots* of people gonna die

    There are plenty of things that can go wrong for a variety of reasons, and when those systems fail, *lots* of people die. Scary, huh? Do you want some kid who only knows a toy language to be one of the people doing those systems?

    Are there bad C coders out there? Yes, like any other language. But us old-farts remember the KISS method, and use good practices. We create systems that *work* and are easy to maintain.

    C, and simple C++, will be around a long time because you cannot just sit down and crank out avionics, or medical devices. It takes experience and skill to do these kinds of systems.

    The key to C++ is KISS – do not get enthralled by all of the fancy things, like (shudder) operator overloading.

    Be tricky in the algorithm. Be simple in the code – it should read like “run, dick, run”.

    IoT is still in the toy stage. I’m not sure why my toaster should talk to my refrigerator.

    And IoT cybersecurity is non-existent. I will agree that embedded systems are vulnerable to cyber attack. I would point out that this is a fairly new problem, and few have gotten it right, from embedded to huge enterprise systems.

  32. Have you considered Rust?

    • I didn’t mention Rust in the article, but it has come up a number of times in conversation. Care to describe it a bit?

  33. I want to do a master’s in computer science. Please, which area is the best to consider for research in computer science?

  34. I have 34 years of software experience and have used dozens of languages, including all the major ones, in various market segments, but especially in the embedded world.
    IMO, C will continue to play some role in real embedded devices, at least for stuff like RT kernels, drivers, and boot code. The only emerging candidate in this area (as a C complement for more complex functionality) that I really like is Go, but it still needs some time to grow and to provide a wider solution (on ARM and other cores) and a lighter one (memory footprint).
