Read through the latest posts to learn more about Polyspace® products.

By Ram Cherukuri

Organizations and teams adopt various models (e.g., V-model and Agile) for their software development processes. Within each model, there are variations depending on the requirements of the application, the industry, and the maturity of the workflow. There are additional variations across the different steps of the software development workflow. For example, some organizations include a formal code review as part of their development process, given its benefits in improving defect detection rates. Others rely solely or heavily on testing activities. Given these wide variations, there are at least a couple of best practices applicable to most modern embedded software development workflows.

By Ram Cherukuri

Polyspace Code Prover™ uses the color orange to highlight operations that can't be automatically proven to be error-free under all circumstances. You can then review potential run-time issues that might lead to robustness or reliability concerns.

By Ram Cherukuri, Fred Noto, and Alexandre Langenieux

CERT C is a set of guidelines for software developers used for secure coding in the C language. It was developed on the CERT community wiki following a community-based development process, with the first edition released in 2008 and the second edition released in 2014.
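As a flavor of what these guidelines look like, here is a minimal sketch of one well-known CERT C rule, INT32-C (ensure that operations on signed integers do not result in overflow). The function name `checked_add` is our own illustration, not from the standard:

```c
#include <limits.h>

/* Illustrative sketch of CERT C rule INT32-C: test for signed
 * overflow *before* performing the addition, because signed
 * integer overflow is undefined behavior in C. */
int checked_add(int a, int b, int *result)
{
    if ((b > 0 && a > INT_MAX - b) ||
        (b < 0 && a < INT_MIN - b)) {
        return -1;          /* the addition would overflow: report an error */
    }
    *result = a + b;        /* safe: no overflow possible here */
    return 0;
}
```

A call such as `checked_add(INT_MAX, 1, &r)` reports the error instead of triggering undefined behavior, which is the kind of defensive pattern the CERT C rules prescribe.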

By Ram Cherukuri

Code generation greatly simplifies the MISRA compliance process. The key objectives of coding standards such as MISRA are readability, maintainability, and portability, in addition to ensuring safety and reliability. Because models are at the core of the development process and code can be generated from them in a consistent manner for different platforms, portability and maintainability become much easier to achieve.

By Ram Cherukuri

The previous post highlighted the benefits of leveraging Polyspace static analysis to help optimize and shorten the testing phase of the verification cycle. This post discusses the inefficiencies of robustness testing and introduces ways to address those challenges.

By Ram Cherukuri

Testing is a major part of the verification process at most embedded software development organizations. Studies estimate that around 25–30% of development time is spent on testing, and in some cases, this can be as high as 50% [1].

By Ram Cherukuri, Gary Ryu

The most recent version of the MISRA C coding standard is MISRA C:2012, which succeeds MISRA C:2004, a version that has been widely adopted in the software community across industries for embedded systems.

By Ram Cherukuri, Stefan David

The MISRA standard is a widely adopted coding standard across industries and has become a commonplace best practice among embedded software development and quality assurance groups. Many of these groups have a strict adherence policy covering at least a subset of the applicable coding rules, if not all of them. Such a compliance policy requires a review process to address violations of the coding rules, and this process can often be resource intensive.
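To make the review burden concrete, here is a sketch of one commonly reviewed guideline, MISRA C:2012 Rule 15.6 (the body of an if statement shall be a compound statement). The function `clamp` is a hypothetical example of ours, not taken from the posts:

```c
/* Non-compliant with MISRA C:2012 Rule 15.6 (a frequent review finding):
 *
 *     if (x > limit)
 *         x = limit;        -- loose statement body, no braces
 */

/* Compliant form: braces make the controlled block explicit, so a
 * later edit cannot silently fall outside the intended branch. */
int clamp(int x, int limit)
{
    if (x > limit) {
        x = limit;
    }
    return x;
}
```

Each such violation found during review must be either fixed or formally justified, which is why teams with strict adherence policies lean on static analysis tools to automate the detection step.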

By Ram Cherukuri

This reminds me of the joke asking, “How many engineers does it take to change a light bulb?”

Many of our customers, especially those in the automotive industry, have used more than one static analysis tool as part of their software development and verification process.

One reason for the use of multiple tools is that, traditionally, the adoption of static analysis was fragmented into different activities such as coding rule compliance, bug finding, and so on. The development organization may have adopted a lint tool for local bug finding and a rule-checking tool to verify compliance with standards such as MISRA, while the quality assurance department may have adopted tools for code metrics such as code coverage, comment density, and cyclomatic complexity.

By Anirban Gangopadhyay and Ram Cherukuri

In part one of this two-part series, we discussed robustness code verification, a method in which you verify a unit of code in isolation from the rest of the code base. We outlined a few examples and discussed the pros and cons of the approach.

In this post, we will discuss contextual code verification, in which you verify your unit of code in the context of the code base where it will be integrated. This post walks through the concepts behind contextual code verification using the same examples as in the last post, and then outlines best practices for using both types of code verification (robustness and contextual).
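The difference between the two methods can be sketched with a toy example of our own (the function names are hypothetical, not from the series). Analyzed in isolation, `scale` takes any `int` for `n`, so the division is a potential divide-by-zero; analyzed in the context of its caller, `n` is provably nonzero:

```c
#include <assert.h>

/* Robustness verification: scale() is analyzed with n ranging over
 * all int values, so the division below is flagged as a possible
 * divide-by-zero. */
int scale(int total, int n)
{
    return total / n;
}

/* Contextual verification: the analyzer also sees this caller, where
 * count is constrained to [1, 8] before scale() is invoked, so the
 * division can be proven safe in this integration context. */
int average_of_batch(const int *values, int count)
{
    int sum = 0;
    assert(count >= 1 && count <= 8);   /* contextual constraint */
    for (int i = 0; i < count; ++i) {
        sum += values[i];
    }
    return scale(sum, count);           /* count is never 0 here */
}
```

The same line of code is orange under one method and provably safe under the other, which is why the choice of verification context matters.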

By Anirban Gangopadhyay and Ram Cherukuri

This is part one of a two-part series outlining code verification methods.

We begin with a question: At what stage of software development should I verify my code?

The answer is simple. You should verify it right after you have compiled it, while the code is fresh in your mind. Once you are shown potential errors, reviewing and fixing them can be almost trivial. It never gets easier to fix errors than at that stage in the workflow.

By Ram Cherukuri, Jeff Chapple, Stefan David, and Jay Abraham

The trend toward faster time-to-market may be driving the misconception that static analysis is only about finding bugs. Software developers must eliminate as many bugs as possible and will use a quick bug-finding tool, though some bugs are likely to remain. This practice may be sufficient for non-safety-critical applications such as smartphone apps, but it is insufficient for safety-critical applications, which require more rigorous methods to verify safety and robustness; this is where the other benefits of static analysis come in. In this article we will bust the misconception that static analysis is only about finding bugs and show that it can help verify compliance with coding standards, produce metrics about code quality, and be used at any stage of software development.

By Jay Abraham, Ram Cherukuri, and Christian Bard

In February 2014, technology blogs and news outlets were abuzz about a newly discovered vulnerability in Apple’s iPhone, iPod, and iPad devices running iOS, as well as Mac OS X. There was a defect in the Transport Layer Security (TLS) and Secure Sockets Layer (SSL) code that could be exploited by what is known as a man-in-the-middle (MitM) attack. The vulnerability was dubbed “goto fail,” and Apple quickly patched the defect with iOS 7.0.6 for its mobile platform and OS X 10.9.2 for the desktop platform.
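The defect itself was a duplicated `goto fail;` line. The following is a simplified sketch of the control-flow pattern, not Apple’s actual TLS/SSL source; because the second `goto` is not guarded by the `if`, it always executes, skipping the real signature check while `err` still holds 0 (success):

```c
/* Simplified sketch of the "goto fail" control-flow defect
 * (illustrative only -- not Apple's actual code). */
int verify_handshake(int hash_err, int signature_valid)
{
    int err;

    if ((err = hash_err) != 0)
        goto fail;
        goto fail;   /* duplicated line: unconditional, always taken */

    /* unreachable: the real signature check is silently skipped */
    err = signature_valid ? 0 : -1;

fail:
    return err;      /* 0 means the connection is trusted */
}
```

With the duplicated `goto`, a bad signature (`signature_valid == 0`) still yields a return value of 0, i.e., the connection is trusted. Static analysis can flag the unreachable assignment as dead code, which is how this class of defect can be caught before release.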
