By Mark Corless and Eric Cigan, MathWorks
In this article you will learn how modeling helped a small team of algorithm and embedded software engineers design a motor control algorithm and implement it on a programmable system-on-chip (SoC). We were the embedded engineers on this team. We will show how modeling helped us partition our design, balance functional behavior with implementation resources, and test in the lab.
Programmable SoCs such as Xilinx® Zynq® SoCs and Altera® SoC FPGAs, which combine programmable logic and microprocessor cores on the same chip, have given design teams new platforms for algorithm deployment in a wide range of applications, including embedded vision, communications, and control of motors and power electronics. These design teams typically include two categories of engineers: algorithm engineers, responsible for conceptual development and elaboration of math-based or rule-based algorithms, and embedded engineers, responsible for refining the algorithms and implementing them in software or hardware on the embedded device.
Algorithm engineers commonly use modeling early in the development process to gain confidence that their algorithms are functionally correct for their application. Embedded engineers, on the other hand, don't always see the benefits of modeling. When these teams do not work closely together, the result can be late error detection that causes project delays, excessive resource use, or compromised functionality due to inadequate design and test iterations.
We set out to see whether modeling could help both algorithm and embedded engineers create a more efficient and collaborative design process. We wanted to focus on modeling algorithm components that we could explore using simulation. We would use simulation to help us make partitioning decisions, use simulation and code generation to balance functional behavior with implementation resources, and automate integration and deployment of the generated code and hand code to make more efficient use of lab time.
We proposed a workflow that would be a mix of code generated from models and hand code. (Throughout the article we will refer to the hand-coded portion of the design as the reference design.) We would begin with models provided by the algorithm developer and iteratively elaborate the models by adding implementation details. At each iteration we would simulate system behavior to ensure the functional correctness of the algorithm models, implement the algorithms with code generation to obtain code that behaved like the model, and then automate integration with our reference design to ensure a repeatable process to get to hardware implementation (Figure 1).
For this case study we decided to design a velocity controller for a permanent magnet synchronous motor using a field-oriented control (FOC) algorithm, and then to deploy it to a Zynq-7000 All Programmable SoC Intelligent Drives Kit II (Figure 2). We chose motor control because it is an application where algorithm engineers and embedded engineers often need to work together. We chose the Zynq Intelligent Drives Kit II because it was readily available and offered the I/O support we required.
The Zynq Intelligent Drives Kit II is a development platform used by engineers who want to test motor control algorithms running on a Zynq Z-7020 SoC device. Based on the ZedBoard development board, the kit includes an Analog Devices FMC motor control module and a 24V brushless DC motor equipped with a 1,250 cycles/revolution encoder. Because we wanted to test motor control algorithms under a range of operating conditions, we used the Zynq Intelligent Drives Kit II with an optional dynamometer system.
After selecting the hardware platform, we reviewed an initial system simulation model provided by the algorithm engineer and identified additional algorithm components that would be required for deployment to the SoC. The model included a controller algorithm for a motor based on data sheet parameters. This algorithm consisted of an outer velocity control loop regulating an inner current control loop using FOC.
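The cascaded structure of this controller can be sketched as two proportional-integral (PI) loops: the outer velocity loop produces the q-axis current reference that the inner current loop tracks. The following is a minimal illustration of that structure only; the gains, limits, and sample rates are placeholders of our own, not the parameters of the controller described here.

```python
# Cascaded PI control sketch: outer velocity loop -> iq reference,
# inner current loop -> vq command, as in field-oriented control.
# All gains, limits, and rates below are illustrative assumptions.

def make_pi(kp, ki, dt, limit):
    """Return a PI controller step function with output saturation."""
    state = {"integral": 0.0}
    def step(error):
        state["integral"] += ki * error * dt
        # Clamp the integrator to the output bound to limit windup.
        state["integral"] = max(-limit, min(limit, state["integral"]))
        out = kp * error + state["integral"]
        return max(-limit, min(limit, out))
    return step

dt_vel, dt_cur = 1e-3, 4e-5                                      # 1 kHz outer, 25 kHz inner
velocity_pi = make_pi(kp=0.05, ki=2.0, dt=dt_vel, limit=5.0)     # output: iq reference (A)
current_pi = make_pi(kp=1.5, ki=300.0, dt=dt_cur, limit=12.0)    # output: vq command (V)

iq_ref = velocity_pi(100.0 - 80.0)   # velocity error of 20 rad/s
vq_cmd = current_pi(iq_ref - 0.1)    # current error against a measured iq of 0.1 A
```

In the actual FOC algorithm the inner loop runs on both the d and q current axes and is followed by an inverse Park transform and PWM generation; the sketch keeps only the loop nesting relevant to the partitioning discussion.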
Although this model captured the core mathematics of the controller, it did not take into consideration the effects of peripherals (such as ADC, encoder, and PWM) or algorithm components required for other modes of operation (Disabled, Open Loop, and Encoder Calibration). We worked with the algorithm engineer to identify which algorithm components to model and decide whether to implement those components on the ARM or the programmable logic on the SoC (Figure 3).
We elaborated the initial system model to include the new algorithm components (Figure 4). To enable system simulation we created lumped-parameter models of existing peripherals that interact with the motor model. For example, we had existing HDL code for the encoder peripheral that we planned to reuse in the deployed design. The encoder peripheral reads a stream of digital pulses at 50 MHz and translates them into count signals read by the controller algorithm at 25 kHz. If we directly modeled this pulse stream, we would introduce 50 MHz dynamics into the system model and significantly increase simulation time. Instead, we created a lumped-parameter model of the encoder that converts the ideal rotor position from the motor model into the encoder counts signal seen by the algorithm components. Modeling at this level of fidelity enabled us to simulate the startup conditions required to test the Encoder Calibration component and to introduce position quantization effects to test the Velocity Control component (Figure 5), while maintaining reasonable simulation times.
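The essence of the lumped-parameter encoder model is a direct quantization of the ideal rotor angle into the count value the controller reads, skipping the pulse stream entirely. A minimal sketch of that idea (our illustration, not the article's Simulink model):

```python
import math

# Lumped encoder model: quantize the ideal rotor angle straight to the
# integer count the controller algorithm samples at 25 kHz, instead of
# simulating the 50 MHz quadrature pulse stream. A 1,250 cycle/rev
# quadrature encoder yields 4 x 1250 = 5000 counts per revolution.

COUNTS_PER_REV = 4 * 1250

def encoder_counts(rotor_position_rad):
    """Map an ideal rotor angle (rad) to the encoder count the algorithm sees."""
    return int(rotor_position_rad / (2 * math.pi) * COUNTS_PER_REV) % COUNTS_PER_REV

print(encoder_counts(math.pi))   # half a revolution -> 2500 counts
```

Because the quantization step is explicit, a model like this also exposes the position-resolution effects that matter when testing the velocity calculation.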
We chose to implement algorithm components on the ARM if they required rates of a few kHz or less. We set this rate constraint because we planned to run a Linux® operating system on the ARM. Algorithm components requiring faster rates would be implemented on the FPGA.
We wanted to implement algorithm components on the ARM whenever possible because we found that design iterations were faster on the ARM than on the FPGA. It was easier to target the algorithm to the ARM core because it supported native floating-point math operations. Most FPGAs perform floating-point math inefficiently, so targeting programmable logic requires the additional step of converting algorithms to fixed point. In addition, we found the process of compiling C code for the ARM was generally faster than compiling HDL code for the FPGA.
We used simulation to determine whether algorithm components could be executed at rates slow enough for the ARM or if the FPGA was required. For example, the algorithm engineer initially proposed an encoder calibration routine that ran at 25 kHz, which would have to be implemented on the FPGA. We used simulation to test whether we could run the encoder calibration component at 1 kHz, found that we could, and decided to implement it on the ARM.
Once we had functionally correct models with the desired component rates, we grouped all components intended for C code generation into an algorithm C model and all components intended for HDL code generation into an algorithm HDL model (Figure 6). We then iteratively added implementation details to the models and generated code until we felt it would fit within an acceptable amount of memory and execute at the component rate.
We used Embedded Coder® to generate C code from the algorithm C model and generate a report summarizing the calling interface and estimated data memory usage. While reviewing the report we realized that all the data types were double-precision floating point. We wanted the data that would interface to the FPGA to be integer or fixed point and the rest of the mathematics to be single-precision floating point. We applied these data types to the model, used simulation to verify the behavior was still acceptable, then generated the improved code. At this point we felt confident that the code was suitable for implementation on the ARM.
We implemented the algorithm HDL model as fixed point since fixed-point operations consume fewer resources on FPGAs. To achieve this, we worked with the algorithm engineer to identify and bound key signal ranges in the design (current, voltage, and velocity), then used Fixed-Point Designer™ to define fixed-point data types that would ensure calculations did not overflow. We used HDL Coder™ to generate code and a summary report.
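The core of the fixed-point type decision is sizing the integer and fractional bits of each word so that the bounded signal range cannot overflow. Fixed-Point Designer automates this analysis; the sketch below only illustrates the underlying arithmetic, and the current bound and word length are assumed values, not those of the actual design.

```python
import math

# Given a bounded signal range and a word length, choose the fraction length
# so the integer part (plus sign bit) covers the full range without overflow.
# Fixed-Point Designer performs this kind of analysis; numbers here are assumptions.

def fixed_point_type(max_abs_value, word_length):
    """Return (fraction_length, resolution) for a signed fixed-point type."""
    int_bits = max(0, math.ceil(math.log2(max_abs_value + 1)))  # integer part
    frac_bits = word_length - 1 - int_bits                      # minus sign bit
    return frac_bits, 2.0 ** -frac_bits

# e.g. a phase current bounded to +/-20 A stored in a 16-bit word:
frac, res = fixed_point_type(20.0, 16)
print(frac, res)   # 10 fractional bits, ~0.98 mA resolution
```

The trade-off the report surfaces is exactly this one: more fractional bits improve resolution but lengthen words, and long words drive up multiplier usage on the FPGA.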
We reviewed the resource estimation section of the report to identify math operations that seemed unexpectedly large. For example, our initial selection of word lengths resulted in several multiplications of two 34-bit numbers, which we felt would needlessly consume FPGA resources. We were able to identify this issue in the resource utilization report, reduce the precision in the model, use simulation to verify functionality was still correct, and then generate the improved code. We used Xilinx Vivado® Design Suite to synthesize the code and verify that it met timing requirements.
Once we had a candidate algorithm implementation, we were ready to integrate it with our reference design. We started by manually integrating the generated C function with our hand-coded ARM embedded project and integrating the generated HDL entity with our hand-coded Vivado project. However, we realized that if we always performed the integration manually, we would need to be involved in every design iteration in the lab. One of our goals in using this workflow was to enable the algorithm engineer to automate the integration and deployment process in the lab.
We used the HDL Coder Support Package for Xilinx Zynq-7000 Platform to register our hand-coded Vivado project as a reference design. We were then able to automate integration of the generated algorithm HDL code with our hand code, build a bitstream, and download it to the FPGA. We used the Embedded Coder Support Package for Xilinx Zynq-7000 Platform to automate the integration of the generated algorithm C code with a Linux operating system, build an executable, download it to the ARM, and interact with it from Simulink®. The support packages provided the AXI interconnect that enabled communication between algorithm components in the ARM core and programmable logic.
During the initial system setup it was essential for the algorithm and embedded engineers to work together in the lab. As the embedded engineers, we had to set up the deployment configuration and work with the algorithm engineer to verify basic functionality. Once the system was set up, the algorithm engineer could independently iterate on the design using Simulink as the primary interface to the SoC.
The algorithm engineer tested the deployed controller and determined that it did not deliver the expected response. Comparison of the simulation and hardware results showed that we had incorrectly calculated the mapping of ADC count to current. The algorithm engineer created additional tests to better characterize the torque constant of the motor and improve the correlation between simulation and hardware (Figure 7).
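An ADC-count-to-current mapping of this kind is typically a linear conversion through the current sensor's offset and sensitivity, which is why a wrong constant shifts or scales every measured current. The sketch below is a hypothetical example of such a mapping; the reference voltage, offset, and sensitivity are placeholder values, not the parameters of the FMC motor control module.

```python
# Hypothetical ADC-count-to-current conversion for a bipolar current sensor.
# V_REF, V_OFFSET, and SENSITIVITY are assumed values for illustration only.

ADC_BITS = 12
V_REF = 3.3           # ADC full-scale voltage (V)
V_OFFSET = 1.65       # sensor output at zero current (V)
SENSITIVITY = 0.066   # sensor gain (V per A)

def adc_to_current(counts):
    """Convert a raw ADC count to phase current in amperes."""
    volts = counts * V_REF / (2 ** ADC_BITS)
    return (volts - V_OFFSET) / SENSITIVITY

print(adc_to_current(2048))   # mid-scale count -> approximately 0 A
```

A mistake in any one of these constants produces a consistent offset or gain error between simulated and measured currents, which is the kind of discrepancy the simulation-versus-hardware comparison revealed.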
The high correlation between the simulation and hardware test results gave us confidence that we could make design decisions at the model level and reduce lab time even further. For example, at one point the motor was spinning in the lab but became uncontrollable under certain conditions. We theorized that the issue was an overflow in the fixed-point velocity calculation implemented on the FPGA. We reproduced the issue in simulation and identified a flaw in the initial assumptions about the maximum speed of the motor. We were able to debug and resolve the issue in simulation, and used lab time only to verify the change.
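This failure mode is easy to reproduce in isolation: a signed fixed-point word sized for an assumed maximum speed wraps around, two's-complement style, as soon as the motor exceeds that speed. The sketch below uses word lengths and speeds of our own choosing to demonstrate the wraparound, not the actual types from the design.

```python
# Demonstrate two's-complement wraparound in a fixed-point velocity word.
# Word length, fraction length, and speeds are assumed for illustration.

def to_fixed(value, word_length, frac_bits):
    """Quantize to signed fixed point, wrapping on overflow (two's complement)."""
    raw = int(round(value * (1 << frac_bits)))
    raw &= (1 << word_length) - 1            # keep only word_length bits
    if raw >= 1 << (word_length - 1):        # reinterpret the sign bit
        raw -= 1 << word_length
    return raw / (1 << frac_bits)

# A 16-bit word with 8 fractional bits represents +/-128: fine at 100 rad/s...
print(to_fixed(100.0, 16, 8))   # 100.0
# ...but 150 rad/s overflows and wraps to a large negative velocity.
print(to_fixed(150.0, 16, 8))   # -106.0
```

A velocity reading that suddenly flips sign like this would drive the control loop hard in the wrong direction, consistent with the motor becoming uncontrollable above the assumed maximum speed.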
The workflow described here enabled us to work more efficiently with the algorithm engineer. Through simulation we assessed the effect of algorithm partitioning on system performance and verified that the encoder calibration component could be moved from the higher-rate programmable logic partition to the lower-rate ARM partition.
Simulation also allowed us to make decisions that conserved implementation resources while maintaining functional behavior, such as reducing the word lengths of math operations in the programmable logic or converting data passed through the AXI interconnect from floating-point to fixed-point data types. Finally, our prototype testing in the lab helped us identify errors in mapping ADC counts to current, and enabled our algorithm engineer to run further tests to characterize the motor's torque constant.
Overall, the workflow supported a close collaboration between us and the algorithm engineer, producing a more efficient implementation while economizing on lab time.
For further details on the workflow described in this article, review Field-Oriented Control of a Permanent Magnet Synchronous Machine. This Zynq motor control example includes the Simulink models and MATLAB® scripts used in our study to run simulations, generate code, test hardware, and compare simulation runs with results from hardware testing.
If you are interested in prototyping motor control algorithms or want to reproduce the results shown in this article and in the example, learn more about the Avnet Zynq Intelligent Drives Kit II from Avnet Electronics Marketing.
To extend the Zynq motor control example to different hardware configurations or to different classes of SoC devices from Xilinx or Altera, consult the example Define and Register Custom Board and Reference Design for SoC Workflow.
For further insight on how to produce accurate models of PMSM and BLDC motors for use with Simulink, review the article Creating a High-Fidelity Model of an Electric Motor for Control System Design and Verification.