Hip Exoskeleton: Motion Recognition Based on Deep Learning

Version 1.0.2 (7.85 MB) by 동윤
This project demonstrates high-accuracy motion recognition using only encoder signals and a deep learning model.

Project Name
Hip Exoskeleton: Motion Recognition Algorithm Based on Deep Learning Using Encoders
AJOU University, Mechatronics Engineering
AML
Project Abstract
The hip-based exoskeleton provides appropriate torque during tasks to effectively reduce the muscle load on wearers, as demonstrated through electromyography (EMG) sensor-based human performance evaluation. For instance, the assistive torque required for lifting tasks is 5-10 times greater than that required for walking. To deliver appropriate torque for various tasks, motion recognition algorithms are employed. Traditional exoskeletons use multiple sensors in addition to encoders for motion recognition, which increases the load and can interfere with task performance. Moreover, existing exoskeletons that rely on real-time angle measurement struggle with fast-changing hip movements, leading to delayed assistive torque and reduced performance. To address these issues, a deep learning-based hip motion recognition algorithm is proposed that uses only encoders to quickly recognize task and walking motions and provide suitable torque, thereby reducing the load and enhancing performance.
Expected Benefits
Traditional exoskeleton control systems often rely on feedback methods such as disturbance observers (DOB) and impedance control, which introduce time delays. In contrast, deep learning-based recognition algorithms enable rapid classification through artificial neural networks. The proposed model uses only encoders, removing the need for additional sensors such as IMUs, thereby decreasing system latency and enhancing real-time data processing. This leads to a more responsive exoskeleton that can quickly adapt to changes in motion, maintaining balance and improving task performance. The deep learning-based algorithm not only optimizes hip exoskeletons but can also be applied to exoskeletons for other body parts and to assistive devices for medical rehabilitation, providing efficient joint torque support across various tasks and settings.
More Information
Readme
1. File Name: CNN_train.mlx
Description
This script is used to load, preprocess, and train a convolutional neural network (CNN) on motion activity data. The steps involved include loading data, preprocessing, data partitioning, CNN model definition, training, evaluation, and saving the trained model.
Key Features
  1. Data Loading: Load data from CSV files.
  2. Data Preprocessing: Process the data into the required format for training.
  3. Data Partitioning: Split the data into training, validation, and test sets.
  4. CNN Model Definition: Define the architecture of the CNN.
  5. Training Options: Set training options and parameters.
  6. Model Training: Train the CNN using the training data.
  7. Model Evaluation: Evaluate the model on the test set and display results.
Additional Information
  • Classes: The motion activities are classified into four categories: Standing, Level Ground Walking, Inclined Walking, and Lifting.
  • Paths: Ensure that the directory paths are correctly set before running the script.
  • Dependencies: This script assumes the presence of MATLAB and relevant toolboxes for deep learning and data processing.
Code Description
Data Loading
clc;
clear;
datadirection = '...\4class'; % set to your dataset folder
cd(datadirection)
datalists = dir('*.csv');
len = 0;
for i = 1 : length(datalists)
filedata = readmatrix(datalists(i).name); % read each CSV once instead of three times
total_data(len+1 : len+size(filedata,1), :) = filedata;
len = len + size(filedata,1);
end
inputdata = total_data(:,1:4);
targetdata = total_data(:, 5);
Load CSV Files: The script changes the directory to the specified path and loads all CSV files.
The data from each file is concatenated into a single matrix.
Dataset Variable
inputdata : 4-column data [Right angle, Right angular velocity, Left angle, Left angular velocity]
targetdata : classification index
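To dry-run the script without the real dataset, a dummy CSV in the same 5-column layout can be generated; a minimal sketch (the file name and values are made up):
% Hypothetical dummy CSV in the expected layout:
% [Right angle, Right angular velocity, Left angle, Left angular velocity, class index]
dummy = [randn(100,4), randi(4,100,1)];
writematrix(dummy, 'dummy_motion.csv');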
Data Partitioning
% Data size
[m, n] = size(inputdata);
step = 50;
% Number of complete 50-sample windows in the dataset
new_m = floor(m / step);
% Preallocate
newinput = cell(new_m,1);
newtarget = zeros(new_m,1);
% Save each 50-sample window and its average label
for i = 1:new_m
newinput{i} = inputdata((i-1)*step+1:i*step,:);
newtarget(i) = mean(targetdata((i-1)*step+1:i*step,:));
end
newtarget = round(newtarget);
% Transform to 4-D double (50 x 4 x 1 x new_m)
newinput = cat(4, newinput{:});
Aggregate Data:
The script groups the data in steps of 50 rows and calculates the average target value for each group.
The processed data is then converted into a 4-dimensional array (50 x 4 x 1 x N) suitable for CNN input.
c = cvpartition(size(newinput,4),"Holdout",0.40); % partition over the window (4th) dimension
trainingdata = newinput(:,:,:,training(c));
trainingtarget = newtarget(training(c));
testdata = newinput(:,:, :,test(c));
testtarget = newtarget(test(c));
cv = cvpartition(size(testdata,4), 'HoldOut', 0.50);
validdata = testdata(:,:,:,test(cv));
validtarget = testtarget(test(cv));
testdata = testdata(:,:,:,training(cv));
testtarget = testtarget(training(cv));
Proportion of dataset:
training data - 60%
validation data - 20%
test data - 20%
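A quick optional sanity check that the window counts match the intended 60/20/20 split:
% Optional: print the number of windows in each split
fprintf('train %d, validation %d, test %d windows\n', ...
size(trainingdata,4), size(validdata,4), size(testdata,4));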
Input Data size check
inputSize = size(newinput,4); % number of windows
inputSize_colunm = size(newinput,2); % columns per window (4 encoder signals)
inputSize_row = size(newinput,1); % rows per window (50 time steps)
unique_classes = unique(newtarget);
numClasses = length(unique_classes);
CNN Model Definition
layers = [ ...
imageInputLayer([inputSize_row,inputSize_colunm,1])
% First convolutional block
convolution2dLayer([10, 4], 8, 'Padding', 'same', 'Name', 'conv1')
batchNormalizationLayer('Name', 'batchnorm1')
reluLayer('Name', 'relu1')
% Second convolutional block
convolution2dLayer([10, 4], 16, 'Padding', 'same', 'Name', 'conv2')
batchNormalizationLayer('Name', 'batchnorm2')
reluLayer('Name', 'relu2')
% Third convolutional block
convolution2dLayer([10, 4], 32, 'Padding', 'same', 'Name', 'conv3')
batchNormalizationLayer('Name', 'batchnorm3')
reluLayer('Name', 'relu3')
% Fully connected layers
fullyConnectedLayer(64, 'Name', 'fc1')
dropoutLayer(0.2, 'Name', 'dropout1')
reluLayer('Name', 'relu6')
fullyConnectedLayer(32, 'Name', 'fc2')
dropoutLayer(0.2, 'Name', 'dropout2')
reluLayer('Name', 'relu7')
fullyConnectedLayer(numClasses, 'Name', 'fc3')
softmaxLayer('Name', 'softmax')
classificationLayer('Name', 'output')];
  • Input Layer : The input layer matches the size of the input data.
  • Convolutional Layers : Three convolutional layers with [ 10 x 4 ] filter sizes and batch normalization.
  • Fully Connected Layers : Three fully connected layers with dropout for regularization.
  • Output Layer : The final classification layer matches the number of classes in the target data.
  • Filter Size : [10 x 4]
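Optionally, the layer array can be inspected with analyzeNetwork (Deep Learning Toolbox) before training to visualize the architecture and catch dimension mismatches:
% Optional: inspect the architecture before training
analyzeNetwork(layers);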
Training Options
options = trainingOptions('adam', ...
'MaxEpochs', 300, ...
'MiniBatchSize', 32, ...
'InitialLearnRate', 0.001, ...
'Shuffle', 'every-epoch', ...
'Verbose', false, ...
'Plots', 'training-progress', ...
'ValidationData', {validdata, categorical(validtarget)});
  • Training Parameters: Set training parameters such as the optimizer (adam), maximum epochs, mini-batch size, learning rate, and validation data.
Model Training
net = trainNetwork(trainingdata, categorical(trainingtarget), layers, options);
Train Network: Train the CNN using the training data and specified options.
Model Evaluation
Pred = classify(net, testdata);
accuracy = sum(Pred == categorical(testtarget)) / numel(testtarget);
disp(['Test Accuracy: ', num2str(accuracy)]);
Prediction and Accuracy: The model's predictions on the test data are compared with the actual labels to calculate accuracy.
Confusion Matrix
confMat = confusionmat(categorical(testtarget), Pred);
figure;
confusionchart(confMat);
title('Confusion Matrix');
Confusion Matrix: Display the confusion matrix to visualize the performance of the classifier.
Class
Standing : 1
Level Ground Walking : 2
Inclined Walking : 3
Lifting : 4
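Using this mapping, the confusion chart can optionally be labeled with motion names rather than numeric indices; a minimal sketch:
% Optional: relabel the confusion matrix with the motion names listed above
classNames = {'Standing','Level Ground Walking','Inclined Walking','Lifting'};
figure;
cm = confusionchart(confMat, classNames);
cm.Title = 'Confusion Matrix';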
Save Model
save('your model name','net')
Save Trained Model: Save the trained network to a .mat file for use in arduino_realtime_BufferWithApp.mlx
2. File Name: Hipexo_QDDmotor.ino
Arduino code (CAN communication and QDD motor encoder)
The Arduino code is C++-based and is provided here for explanation; it will not actually run in this environment. To execute it, download the Arduino IDE.
Code description
Include Library
#include <mcp_can_dfs.h>
#include <mcp_can.h>
#include <SPI.h>
Variable and Set Pin number
#define UP A1
#define DOWN A3
#define RIGHT A5
#define CLICK A4
#define LEFT A2
#define LED2 8
#define LED3 7
#define P_MIN -12.5f
#define P_MAX 12.5f
#define V_MIN -35.0f
#define V_MAX 35.0f
#define KP_MIN 0.0f
#define KP_MAX 500.0f
#define KD_MIN 0.0f
#define KD_MAX 5.0f
#define T_MIN -9.0f
#define T_MAX 9.0f
#define ID_MIN -9.0f
#define ID_MAX 9.0f
float p_in = 0.0f;
float v_in = 0.0f;
float kp_in = 5.00f;
float kd_in = 0.5f;
float t_in = 0.0f;
float p_out1 = 0.0f;
float v_out1 = 0.0f;
float t_out1 = 0.0f;
float t_f1 = 0.0f;
float p_outf1 = 0.0f;
float p_inf1 = 0.0f;
unsigned long id_int = 0;
int id_out;
unsigned long timer =0;
bool onoff;
int indice1;
int indice2;
float bufp[4];
const int SPI_CS_PIN = 10; //Can Board OUTPUT PIN NUM
MCP_CAN CAN (SPI_CS_PIN); //Set CS pin
Setup mode
void setup() {
Serial.begin(115200);
delay(1000);
while (CAN_OK != CAN.begin(CAN_1000KBPS))
{
Serial.println("CAN BUS Shield init fail");
Serial.println(" Init CAN BUS Shield again");
delay(100);
}
Serial.println("CAN BUS Shield is ok!");
//Initialize pins as necessary
pinMode(UP, INPUT);
pinMode(DOWN, INPUT);
pinMode(LEFT, INPUT);
pinMode(RIGHT, INPUT);
pinMode(CLICK, INPUT);
pinMode(LED2, OUTPUT);
pinMode(LED3, OUTPUT);
digitalWrite(UP, HIGH);
digitalWrite(DOWN, HIGH);
digitalWrite(LEFT, HIGH);
digitalWrite(RIGHT, HIGH);
digitalWrite(CLICK, HIGH);
digitalWrite(LED2, LOW);
digitalWrite(LED3, LOW);
}
Loop
void loop() {
onoff = !onoff;
if (onoff == 1)
{
indice1 = 1;
indice2 = 0;
}else if (onoff == 0)
{
indice1 = 0;
indice2 = 1;
}
if (digitalRead(RIGHT) ==LOW)
{
//Enable
EnterMotorMode();
Zero();
digitalWrite(LED2, HIGH);
}
if (digitalRead(LEFT)==LOW)
{ Zero();
ExitMotorMode();
digitalWrite(LED2,LOW);
}
//send CAN
pack_cmd();
//receive CAN
if(CAN_MSGAVAIL == CAN.checkReceive())
{
unpack_reply();
}
if (onoff == 0){
bufp[0]= p_outf1;
bufp[1]= v_out1*80;
}
if (onoff == 1){
bufp[2]= -p_outf1;
bufp[3]= -v_out1*80;
Serial.print(bufp[0]);
Serial.print(" ");
Serial.print(bufp[1]);
Serial.print(" ");
Serial.print(bufp[2]);
Serial.print(" ");
Serial.println(bufp[3]);
}
} // end loop()
Joystick method
  1. Right: EnterMotorMode execution and initial encoder zero setting
  2. Left: ExitMotorMode execution and encoder zero setting
Serial Output
1. Data: 4-column [Right angle, Right angular velocity, Left angle, Left angular velocity] - *String
2. The two CAN buses transmit simultaneously, but replies cannot both be received at once, so bufp is a buffer that receives the two motors' data alternately (a sample line and its MATLAB-side parsing are sketched below):
1) Right QDD motor
2) Left QDD motor
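For reference, each serial line is a space-separated string of the four values; a minimal MATLAB-side sketch (the sample values are made up):
% Hypothetical serial line as received by MATLAB
data = "12.34 0.56 -11.87 -0.43"; % [R angle, R ang. vel., L angle, L ang. vel.]
newData = str2num(data) % converts the string to a 1x4 numeric row vector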
unpack_reply Function
void unpack_reply() {
byte len = 0;
byte buf[8];
CAN.readMsgBuf(&len, buf); // read one 8-byte reply frame
unsigned int id = buf[0]; // controller id
unsigned int p_int = (buf[1] << 8) | buf[2]; // 16-bit position code
unsigned int v_int = (buf[3] << 4) | (buf[4] >> 4); // 12-bit velocity code
unsigned int i_int = ((buf[4] & 0xF) << 8) | buf[5]; // 12-bit current code
/// convert uints to floats ///
p_out1 = uint_to_float(p_int, P_MIN, P_MAX, 16);
v_out1 = uint_to_float(v_int, V_MIN, V_MAX, 12);
t_out1 = uint_to_float(i_int, -T_MAX, T_MAX, 12);
t_f1 = t_out1*1.5;
p_outf1 = p_out1*180/3.14; // rad -> deg
}
QDD Encoder - The Arduino CAN bus transmits data as an 8-byte buffer. This function receives that data.
/// CAN reply packet structure ///
16 bit position, between -4*pi and 4*pi
12 bit velocity, between -30 and +30 rad/s
12 bit current, between -40 and 40
The CAN packet is 5 8-bit words
formatted as follows. For each quantity, bit 0 is LSB
0: [position [15-8]]
1: [position [7-0]]
2: [velocity [11-4]]
3: [velocity [3-0], current [11-8]]
4: [current [7-0]]
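This bit layout can be sanity-checked off-target by mirroring the unpacking in MATLAB; a minimal sketch with made-up reply bytes:
% Hypothetical reply bytes: [id, pos hi, pos lo, vel hi, vel lo/cur hi, cur lo]
buf = [1, 128, 0, 87, 208, 0];
p_int = bitshift(buf(2), 8) + buf(3); % 16-bit position code
v_int = bitshift(buf(4), 4) + bitshift(buf(5), -4); % 12-bit velocity code
i_int = bitshift(bitand(buf(5), 15), 8) + buf(6); % 12-bit current code
u2f = @(x, lo, hi, bits) x*(hi - lo)/(2^bits - 1) + lo; % same mapping as uint_to_float
p_out = u2f(p_int, -12.5, 12.5, 16) % mid-range code -> approximately 0 rad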
EnterMotorMode Function
void EnterMotorMode(){
byte buf[8];
buf[0] = 0xFF;
buf[1] = 0xFF;
buf[2] = 0xFF;
buf[3] = 0xFF;
buf[4] = 0xFF;
buf[5] = 0xFF;
buf[6] = 0xFF;
buf[7] = 0xFC;
CAN.sendMsgBuf(0x02,indice2, 8, buf);
CAN.sendMsgBuf(0x01,indice1, 8, buf);
}
To turn on the QDD motor, send this 8-byte buffer to the QDD motor controller.
ExitMotorMode Function
void ExitMotorMode(){
byte buf[8];
buf[0] = 0xFF;
buf[1] = 0xFF;
buf[2] = 0xFF;
buf[3] = 0xFF;
buf[4] = 0xFF;
buf[5] = 0xFF;
buf[6] = 0xFF;
buf[7] = 0xFD;
CAN.sendMsgBuf(0x02, indice2, 8, buf);
CAN.sendMsgBuf(0x01, indice1, 8, buf);
}
To turn off the QDD motor, send this 8-byte buffer to the QDD motor controller.
Zero Function
void Zero(){
byte buf[8];
buf[0] = 0xFF;
buf[1] = 0xFF;
buf[2] = 0xFF;
buf[3] = 0xFF;
buf[4] = 0xFF;
buf[5] = 0xFF;
buf[6] = 0xFF;
buf[7] = 0xFE;
CAN.sendMsgBuf(0x02, indice2, 8, buf);
CAN.sendMsgBuf(0x01, indice1, 8, buf);
}
To set the current encoder position to 0 deg, send this 8-byte buffer to the QDD motor controller.
pack_cmd Function
void pack_cmd(){
byte buf[8];
///limit data to be within bounds///
float p_des = constrain(p_in, P_MIN, P_MAX);
float v_des = constrain(v_in, V_MIN, V_MAX);
float kp = constrain(kp_in, KP_MIN, KP_MAX);
float kd = constrain(kd_in, KD_MIN, KD_MAX);
float t_ff = constrain(t_in, T_MIN, T_MAX);
unsigned int p_int = float_to_uint(p_des, P_MIN, P_MAX, 16);
unsigned int v_int = float_to_uint(v_des, V_MIN, V_MAX, 12);
unsigned int kp_int = float_to_uint(kp, KP_MIN, KP_MAX, 12);
unsigned int kd_int = float_to_uint(kd, KD_MIN, KD_MAX, 12);
unsigned int t_int = float_to_uint(t_ff, T_MIN, T_MAX, 12);
buf[0] = p_int >> 8;
buf[1] = p_int & 0xFF;
buf[2] = v_int >> 4;
buf[3] = ((v_int & 0xF) <<4) | (kp_int >>8);
buf[4] = kp_int & 0xFF;
buf[5] = kd_int >>4;
buf[6] = ((kd_int & 0xF) <<4) | (t_int >>8);
buf[7] = t_int & 0xFF;
CAN.sendMsgBuf(0x02, indice2, 8, buf);
CAN.sendMsgBuf(0x01, indice1, 8, buf);
}
QDD Encoder - The Arduino CAN bus transmits data as an 8-byte buffer. This function transmits that data.
///CAN Command Packet Structure///
16 bit position command, between -4*pi and 4*pi
12 bit velocity command, between -30 and +30 rad/s
12 bit kp, between 0 and 500 N-m/rad
12 bit kd, between 0 and 100 N-m*s/rad
12 bit feed forward torque, between -18 and 18 N-m
///CAN packet is 8 8-bit words
formatted as follows. For each quantity, bit 0 is LSB
0: [position [15-8]]
1: [position [7-0]]
2: [velocity [11-4]]
3: [velocity [3-0], kp [11-8]]
4: [kp [7-0]]
5: [kd [11-4]]
6: [kd [3-0], torque [11-8]]
7: [torque [7-0]]
Mapping Functions
unsigned int float_to_uint(float x, float x_min, float x_max, int bits)
{
float span = x_max-x_min;
float offset = x_min;
unsigned int pgg = 0;
if(bits==12){
pgg = (unsigned int) ((x-offset)*4095.0/span);
}
if(bits==16){
pgg = (unsigned int) ((x-offset)*65535.0/span);
}
return pgg;
}
float uint_to_float(unsigned int x_int, float x_min, float x_max, int bits)
{
float span = x_max-x_min;
float offset = x_min;
float pgg = 0;
if (bits==12){
pgg = ((float) x_int)*span/4095 + offset;
}
if (bits==16){
pgg = ((float) x_int)*span/65535.0 + offset;
}
return pgg;
}
The buffer data must be mapped according to its bit width (a round-trip sketch follows):
Receiving data : [uint -> float]
Transmitting data : [float -> uint]
Two bit widths are handled:
12 bit : data size - 4096
16 bit : data size - 65536
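The same float<->uint mapping can be expressed compactly in MATLAB for a round-trip check (the Arduino cast truncates, while round is used here; the test value is arbitrary):
% Round-trip check of the mapping at 12-bit resolution over [-35, 35]
f2u = @(x, lo, hi, bits) round((x - lo)*(2^bits - 1)/(hi - lo));
u2f = @(xi, lo, hi, bits) xi*(hi - lo)/(2^bits - 1) + lo;
v = 3.7; % arbitrary velocity in rad/s
vi = f2u(v, -35, 35, 12);
v_back = u2f(vi, -35, 35, 12) % approx. 3.7, within one 12-bit quantization step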
3. File Name: arduino_realtime_BufferWithApp.mlx
MATLAB Code (real-time Arduino data transmission and reception / CNN classification)
This MATLAB script reads real-time data from a serial port, uses a pre-trained neural network model to classify the motion and derive a torque command, and displays the predictions on a real-time graph.
Requirements
  • MATLAB
  • Deep Learning Toolbox
  • Pre-trained artificial neural network model file (net)
  • Serial port device
Key Features
Real-time Arduino reception:
1. Receives the angle and angular velocity data computed on the Arduino in real time. [serial communication data]
2. Received data format: 4 columns [Right angle, Right angular velocity, Left angle, Left angular velocity] - *String
3. Baud Rate: 115200
Buffer:
1. Stacks 50 rows of the 4-column data in dataBuffer.
2. Stacks 3 classification results in a prediction buffer -> the dominant result is used to suppress noise.
Machine learning classification:
1. Loads the network from the pre-trained .mat file and classifies the data.
2. Input size: 50x4
Visualization
Visualizes the classification results using an App Designer app.
Code description
Reset Variables
clc; clear;
CNN Model load
load('your network path', 'net');
Please set your network file path
COM Port number and Set Baudrate
s = serialport("COM5", 115200);
configureTerminator(s, "LF"); % terminate each read at a newline
Sets up and opens the serial port. (The serialport object opens the connection on creation, so no fopen call is needed.)
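If the COM port name is not known in advance, the available ports can be listed first:
% List serial ports available on this machine
serialportlist("available")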
Initialization of Buffers and Variables
dataBuffer = [];
pred_Buffer = [];
consbuf1 = [];
consbuf2 = [];
pred_tempbuf=[0 0 0 0];
dataBuffer : stacks 50 rows of Arduino serial data
pred_Buffer : to reduce recognition noise, once 3 predictions are stacked, the dominant value is selected (see the sketch below)
consbuf : accumulates control signals derived from the predictions.
pred_tempbuf : temporarily holds the summed predictions.
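A minimal sketch of that dominant-class vote in isolation, with made-up softmax rows:
% Hypothetical prediction buffer: one softmax row per classified 50-row window
pred_Buffer = [0.7 0.2 0.05 0.05;
0.6 0.3 0.05 0.05;
0.2 0.6 0.10 0.10];
pred_tempbuf = sum(pred_Buffer, 1); % column-wise sums over the 3 predictions
[~, index] = max(pred_tempbuf) % -> 1, i.e. Standing dominates the vote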
Application Initialization
index = 1;
timeindex=0;
torqueindex1=1;
torqueindex2=1;
trcons = 0;
trcons2 = 0;
app=Realtime_Gauge(pred_tempbuf, index, timeindex, torqueindex1, torqueindex2);
Initializes the real-time gauge application with initial parameters.
Realtime Serial communication (Arduino -MATLAB)
while true
tempRow = [];
data = readline(s); % Read SerialPort Data
newData = str2num(data); % String Data -> number Data
tempRow = [tempRow, newData]; % Add data to Temporary row buffer
if ~isempty(tempRow)
dataBuffer = [tempRow; dataBuffer]; % Add data row by row
end
if size(dataBuffer, 1) >= 50
% Classify desired data
inputdata = reshape(dataBuffer, [size(dataBuffer, 1), size(dataBuffer, 2), 1]);
prediction = net.predict(inputdata);
if size(pred_Buffer, 1) == 3
pred_tempbuf(1, 1) = sum(pred_Buffer(:, 1));
pred_tempbuf(1, 2) = sum(pred_Buffer(:, 2));
pred_tempbuf(1, 3) = sum(pred_Buffer(:, 3));
pred_tempbuf(1, 4) = sum(pred_Buffer(:, 4));
[a, index] = max(pred_tempbuf);
app.pred_tempbuf = pred_tempbuf;
app.index = index;
app.updateImages();
drawnow;
switch index
case 1
trcons= 0;
trcons2= 0;
case 2
if newData(:,2) > 0
trcons= 1;
trcons2= 0;
else
trcons= 0;
trcons2= 1;
end
case 3
if newData(:,2) > 0
trcons= 3;
trcons2= 0;
else
trcons= 0;
trcons2= 3;
end
case 4
trcons=5;
trcons2= 5;
end
consbuf1(end+1)= trcons;
if width(consbuf1) > 20
consbuf1=consbuf1(:,width(consbuf1)-20:width(consbuf1));
end
consbuf2(end+1)= trcons2;
if width(consbuf2) > 20
consbuf2=consbuf2(:,width(consbuf2)-20:width(consbuf2));
end
timeindex = timeindex + 0.25;
app.timeindex = 0:0.25:timeindex-0.25;
if width(app.timeindex) > 20
app.timeindex= app.timeindex(:,width(app.timeindex)-20:width(app.timeindex));
end
app.torqueindex1 = consbuf1.*ones(size(app.timeindex));
app.torqueindex2 = consbuf2.*ones(size(app.timeindex));
plot(app.UIAxes, app.timeindex, app.torqueindex1,'b','LineWidth',3);
hold(app.UIAxes,'on')
plot(app.UIAxes, app.timeindex, app.torqueindex2,'r','LineWidth',3);
hold(app.UIAxes,'off')
legend(app.UIAxes,'Right','Left');
pred_tempbuf = [];
pred_Buffer(3, :) = [];
else
pred_Buffer = [prediction; pred_Buffer];
end
% Data Buffer reset
dataBuffer = [];
end
end
  • Continuously reads data from the serial port, converts it to numerical format, and appends it to the data buffer.
  • If the data buffer has at least 50 rows, it reshapes the data for the neural network.
  • The neural network predicts the motion class from the input data, which is then mapped to a torque command.
  • Accumulates predictions in a buffer until there are 3 sets of predictions.
  • Determines the most frequent prediction and updates the control signal accordingly.
  • Updates the time index and torque index for plotting.
  • Plots the updated torque predictions in real-time.
  • Clears the prediction buffer and data buffer for the next set of data.
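To test this pipeline without the exoskeleton hardware, recorded data can be replayed offline through the same 50-row windowing and classification; a minimal sketch (the log file name is hypothetical and assumed to use the 4-column training format):
% Offline replay: classify recorded data window by window (no serial port)
load('your network path', 'net'); % set your trained-network path
logdata = readmatrix('recorded_motion.csv'); % hypothetical recorded log
for k = 1:floor(size(logdata,1)/50)
window = logdata((k-1)*50+1 : k*50, 1:4);
inputdata = reshape(window, [50, 4, 1]);
disp(classify(net, inputdata)); % predicted class for this window
end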
4. File Name: Realtime_Gauge.m
MATLAB Code (receives buffer data from arduino_realtime_BufferWithApp / builds the GUI)
Buffer data reception:
1. Receives pred_tempbuf, index, timeindex, and torqueindex from arduino_realtime_BufferWithApp.
2. pred_tempbuf : [1x4] data, the sum of the three most recent classification results
3. index : the index of the column holding the maximum of pred_tempbuf = the dominant motion
4. timeindex, torqueindex : arguments for drawing the real-time torque graph on the UIAxes component
Motion recognition visualization:
The app monitors four motion states (Standing, Walking, Inclined Walking, Lifting) and displays the current state as images.
  • Standing Gauge
Image1_1, Image1_2, Image1_3
  • Walking Gauge
Image2_1, Image2_2, Image2_3
  • Inclined Walking Gauge
Image3_1, Image3_2, Image3_3
  • Lifting Gauge
Image4_1, Image4_2, Image4_3
Each motion state is updated with a green (active), red (maximum), or gray (inactive) image according to the values in the pred_tempbuf array.
Current state display:
The app's 'DisplayLabel' component uses the received index to display the most dominant motion.
Image file paths
The 'Realtime_Gauge' app uses several image files, so the image folder ('그림') must be located in the same directory as the app file.
Code description
AppBase Class
classdef Realtime_Gauge < matlab.apps.AppBase
The Realtime_Gauge class inherits from the MATLAB AppBase class.
Properties
properties (Access = public)
UIFigure matlab.ui.Figure
LiftingLabel matlab.ui.control.Label
InclinedWalkingLabel matlab.ui.control.Label
WalkingLabel matlab.ui.control.Label
StandingLabel matlab.ui.control.Label
DisplayLabel matlab.ui.control.Label
PredictedCostLabel matlab.ui.control.Label
MotionLabel matlab.ui.control.Label
Imagelogo matlab.ui.control.Image
Imagefulllogo1 matlab.ui.control.Image
Imagefulllogo2 matlab.ui.control.Image
Image_1 matlab.ui.control.Image
Image_2 matlab.ui.control.Image
Image_3 matlab.ui.control.Image
Image_4 matlab.ui.control.Image
Image1_1 matlab.ui.control.Image
Image1_2 matlab.ui.control.Image
Image1_3 matlab.ui.control.Image
Image2_1 matlab.ui.control.Image
...
Image4_3 matlab.ui.control.Image
pred_tempbuf double
index double
UIAxes matlab.ui.control.UIAxes
timeindex double
torqueindex1 double
torqueindex2 double
end
  • 'UIFigure': The main window of the app.
  • 'UIAxes': The graph area where real-time torque values are displayed.
  • 'Image' and 'Label' Components: Visual components used to represent the motion states.
Methods
'createComponents(app)' creates the UI components of the app.
% Component initialization
methods (Access = private)
% Create UIFigure and components
function createComponents(app)
% Get the file path for locating images
pathToMLAPP = fileparts(mfilename('fullpath'));
% Create UIFigure and hide until all components are created
app.UIFigure = uifigure('Visible', 'off');
app.UIFigure.Position = [100 100 978 780];
app.UIFigure.Name = 'MATLAB App';
% Create UIAxes
app.UIAxes = uiaxes(app.UIFigure);
title(app.UIAxes, 'Realtime Torque')
xlabel(app.UIAxes, 'Time')
ylabel(app.UIAxes, 'Torque')
zlabel(app.UIAxes, 'Z')
app.UIAxes.FontName = 'Arial Black';
app.UIAxes.FontSize = 18;
ylim(app.UIAxes,[-1 6]);
app.UIAxes.Position = [460 440 520 330];
% Create Image
app.Image_1 = uiimage(app.UIFigure);
app.Image_1.Position = [50 650 80 80];
app.Image_1.ImageSource = fullfile(pathToMLAPP, '그림/standing.png');
% Create Image_2
app.Image_2 = uiimage(app.UIFigure);
app.Image_2.Position = [50 500 80 80];
app.Image_2.ImageSource = fullfile(pathToMLAPP, '그림/walking.png');
% Create Image_3
app.Image_3 = uiimage(app.UIFigure);
app.Image_3.Position = [50 340 80 80];
app.Image_3.ImageSource = fullfile(pathToMLAPP, '그림/inclined_walking.png');
% Create Image_4
app.Image_4 = uiimage(app.UIFigure);
app.Image_4.Position = [50 200 80 80];
app.Image_4.ImageSource = fullfile(pathToMLAPP, '그림/lifting.png');
% Create Standing Gauge
app.Image1_1 = uiimage(app.UIFigure);
app.Image1_1.Position = [152 638 100 100];
app.Image1_2 = uiimage(app.UIFigure);
app.Image1_2.Position = [251 638 100 100];
app.Image1_3 = uiimage(app.UIFigure);
app.Image1_3.Position = [350 638 100 100];
% Create Walking Gauge
app.Image2_1 = uiimage(app.UIFigure);
app.Image2_1.Position = [152 480 100 100];
app.Image2_2 = uiimage(app.UIFigure);
app.Image2_2.Position = [251 480 100 100];
app.Image2_3 = uiimage(app.UIFigure);
app.Image2_3.Position = [350 480 100 100];
% Create Inclined Walking Gauge
app.Image3_1 = uiimage(app.UIFigure);
app.Image3_1.Position = [152 330 100 100];
app.Image3_2 = uiimage(app.UIFigure);
app.Image3_2.Position = [251 330 100 100];
app.Image3_3 = uiimage(app.UIFigure);
app.Image3_3.Position = [350 330 100 100];
% Create Lifting Gauge
app.Image4_1 = uiimage(app.UIFigure);
app.Image4_1.Position = [152 180 100 100];
app.Image4_2 = uiimage(app.UIFigure);
app.Image4_2.Position = [251 180 100 100];
app.Image4_3 = uiimage(app.UIFigure);
app.Image4_3.Position = [350 180 100 100];
% Create StandingLabel
app.StandingLabel = uilabel(app.UIFigure);
app.StandingLabel.Position = [60 629 120 22];
app.StandingLabel.Text = 'Standing';
app.StandingLabel.FontWeight = 'bold';
app.StandingLabel.FontSize = 15;
% Create WalkingLabel
app.WalkingLabel = uilabel(app.UIFigure);
app.WalkingLabel.Position = [60 479 120 22];
app.WalkingLabel.Text = 'Walking';
app.WalkingLabel.FontWeight = 'bold';
app.WalkingLabel.FontSize = 15;
% Create InclinedWalkingLabel
app.InclinedWalkingLabel = uilabel(app.UIFigure);
app.InclinedWalkingLabel.Position = [35 310 186 30];
app.InclinedWalkingLabel.Text = 'Inclined Walking';
app.InclinedWalkingLabel.FontSize = 15;
app.InclinedWalkingLabel.FontWeight = 'bold';
% Create LiftingLabel
app.LiftingLabel = uilabel(app.UIFigure);
app.LiftingLabel.Position = [60 171 186 30];
app.LiftingLabel.Text = 'Lifting';
app.LiftingLabel.FontSize = 15;
app.LiftingLabel.FontWeight = 'bold';
% Create MotionLabel
app.MotionLabel = uilabel(app.UIFigure);
app.MotionLabel.FontName = 'Arial Black';
app.MotionLabel.FontSize = 24;
app.MotionLabel.FontWeight = 'bold';
app.MotionLabel.Position = [50 740 100 35];
app.MotionLabel.Text = 'Motion';
% Create PredictedCostLabel
app.PredictedCostLabel = uilabel(app.UIFigure);
app.PredictedCostLabel.FontName = 'Arial Black';
app.PredictedCostLabel.FontSize = 24;
app.PredictedCostLabel.Position = [210 740 200 35];
app.PredictedCostLabel.Text = 'Predicted Cost';
% Create DisplayLabel
app.DisplayLabel = uilabel(app.UIFigure);
app.DisplayLabel.FontName = 'Arial Black';
app.DisplayLabel.FontSize = 80;
app.DisplayLabel.FontWeight = 'bold';
app.DisplayLabel.FontAngle = 'italic';
app.DisplayLabel.FontColor = [0.6353 0.0784 0.1843];
app.DisplayLabel.Position = [500 170 450 300];
app.DisplayLabel.HorizontalAlignment = 'center';
app.DisplayLabel.VerticalAlignment = 'center';
% Create Imagelogo
app.Imagelogo = uiimage(app.UIFigure);
app.Imagelogo.Position = [580 35 400 100];
app.Imagelogo.ImageSource = fullfile(pathToMLAPP, '그림/logoA.png');
% Create Imagefulllogo1
app.Imagefulllogo1 = uiimage(app.UIFigure);
app.Imagefulllogo1.Position = [190 35 400 100];
app.Imagefulllogo1.ImageSource = fullfile(pathToMLAPP, '그림/fulllogo.png');
% Create Imagefulllogo2
app.Imagefulllogo2 = uiimage(app.UIFigure);
app.Imagefulllogo2.Position = [5 35 200 100];
app.Imagefulllogo2.ImageSource = fullfile(pathToMLAPP, '그림/fulllogo1.png');
% Show the figure after all components are created
app.UIFigure.Visible = 'on';
end
end
The constructor of the Realtime_Gauge class initializes the app, configures the UI, and updates the images using 'updateImages(app)'.
% App creation and deletion
methods (Access = public)
% Construct app
function app = Realtime_Gauge(pred_tempbuf, index, timeindex, torqueindex1, torqueindex2);
% Create UIFigure and components
createComponents(app)
% Handle input values as variables
app.pred_tempbuf = pred_tempbuf;
app.index = index;
app.timeindex= timeindex;
app.torqueindex1 = torqueindex1;
app.torqueindex2 = torqueindex2;
% Update images based on initial value of onoff
app.updateImages();
% Register the app with App Designer
registerApp(app, app.UIFigure)
if nargout == 0
clear app
end
end
% Code that executes before app deletion
function delete(app)
% Delete UIFigure when app is deleted
delete(app.UIFigure)
end
end
updateImages(app): Updates the image states based on the values of 'pred_tempbuf' and 'index'.
methods (Access = public)
% Method to update images based on the value of onoff
function updateImages(app)
pathToMLAPP = fileparts(mfilename('fullpath'));
% Standing Gauge onoff
if round(app.pred_tempbuf(1,1)) >= 1
imagePaths1_1 = '그림/green.png';
if round(app.pred_tempbuf(1,1)) >= 2
imagePaths1_2 = '그림/green.png';
if round(app.pred_tempbuf(1,1)) >= 3
imagePaths1_3 = '그림/green.png';
else
imagePaths1_3 = '그림/not.png';
end
else
imagePaths1_2 = '그림/not.png';
imagePaths1_3 = '그림/not.png';
end
else
imagePaths1_1 = '그림/not.png';
imagePaths1_2 = '그림/not.png';
imagePaths1_3 = '그림/not.png';
end
% Walking Gauge onoff
if round(app.pred_tempbuf(1,2)) >= 1
imagePaths2_1 = '그림/green.png';
if round(app.pred_tempbuf(1,2)) >= 2
imagePaths2_2 = '그림/green.png';
if round(app.pred_tempbuf(1,2)) >= 3
imagePaths2_3 = '그림/green.png';
else
imagePaths2_3 = '그림/not.png';
end
else
imagePaths2_2 = '그림/not.png';
imagePaths2_3 = '그림/not.png';
end
else
imagePaths2_1 = '그림/not.png';
imagePaths2_2 = '그림/not.png';
imagePaths2_3 = '그림/not.png';
end
% Inclined Walking Gauge onoff
if round(app.pred_tempbuf(1,3)) >= 1
imagePaths3_1 = '그림/green.png';
if round(app.pred_tempbuf(1,3)) >= 2
imagePaths3_2 = '그림/green.png';
if round(app.pred_tempbuf(1,3)) >= 3
imagePaths3_3 = '그림/green.png';
else
imagePaths3_3 = '그림/not.png';
end
else
imagePaths3_2 = '그림/not.png';
imagePaths3_3 = '그림/not.png';
end
else
imagePaths3_1 = '그림/not.png';
imagePaths3_2 = '그림/not.png';
imagePaths3_3 = '그림/not.png';
end
% Lifting Gauge onoff
if round(app.pred_tempbuf(1,4)) >= 1
imagePaths4_1 = '그림/green.png';
if round(app.pred_tempbuf(1,4)) >= 2
imagePaths4_2 = '그림/green.png';
if round(app.pred_tempbuf(1,4)) >= 3
imagePaths4_3 = '그림/green.png';
else
imagePaths4_3 = '그림/not.png';
end
else
imagePaths4_2 = '그림/not.png';
imagePaths4_3 = '그림/not.png';
end
else
imagePaths4_1 = '그림/not.png';
imagePaths4_2 = '그림/not.png';
imagePaths4_3 = '그림/not.png';
end
% Preprocess color for the maximum value
switch app.index
case 1
if round(app.pred_tempbuf(1,1)) == 3
imagePaths1_1 = '그림/red.png';
imagePaths1_2 = '그림/red.png';
imagePaths1_3 = '그림/red.png';
elseif round(app.pred_tempbuf(1,1)) == 2
imagePaths1_1 = '그림/red.png';
imagePaths1_2 = '그림/red.png';
end
app.DisplayLabel.Text = 'Standing';
case 2
if round(app.pred_tempbuf(1,2)) == 3
imagePaths2_1 = '그림/red.png';
imagePaths2_2 = '그림/red.png';
imagePaths2_3 = '그림/red.png';
elseif round(app.pred_tempbuf(1,2)) == 2
imagePaths2_1 = '그림/red.png';
imagePaths2_2 = '그림/red.png';
end
app.DisplayLabel.Text = 'Walking';
case 3
if round(app.pred_tempbuf(1,3)) == 3
imagePaths3_1 = '그림/red.png';
imagePaths3_2 = '그림/red.png';
imagePaths3_3 = '그림/red.png';
elseif round(app.pred_tempbuf(1,3)) == 2
imagePaths3_1 = '그림/red.png';
imagePaths3_2 = '그림/red.png';
end
app.DisplayLabel.Text = 'Inclined<br>Walking';
app.DisplayLabel.Interpreter = 'html';
case 4
if round(app.pred_tempbuf(1,4)) == 3
imagePaths4_1 = '그림/red.png';
imagePaths4_2 = '그림/red.png';
imagePaths4_3 = '그림/red.png';
elseif round(app.pred_tempbuf(1,4)) == 2
imagePaths4_1 = '그림/red.png';
imagePaths4_2 = '그림/red.png';
end
app.DisplayLabel.Text = 'Lifting';
end
% load Image
app.Image1_1.ImageSource = fullfile(pathToMLAPP, imagePaths1_1);
app.Image1_2.ImageSource = fullfile(pathToMLAPP, imagePaths1_2);
app.Image1_3.ImageSource = fullfile(pathToMLAPP, imagePaths1_3);
app.Image2_1.ImageSource = fullfile(pathToMLAPP, imagePaths2_1);
app.Image2_2.ImageSource = fullfile(pathToMLAPP, imagePaths2_2);
app.Image2_3.ImageSource = fullfile(pathToMLAPP, imagePaths2_3);
app.Image3_1.ImageSource = fullfile(pathToMLAPP, imagePaths3_1);
app.Image3_2.ImageSource = fullfile(pathToMLAPP, imagePaths3_2);
app.Image3_3.ImageSource = fullfile(pathToMLAPP, imagePaths3_3);
app.Image4_1.ImageSource = fullfile(pathToMLAPP, imagePaths4_1);
app.Image4_2.ImageSource = fullfile(pathToMLAPP, imagePaths4_2);
app.Image4_3.ImageSource = fullfile(pathToMLAPP, imagePaths4_3);
end
end
  • This code checks the values in app.pred_tempbuf for each gauge (Standing, Walking, Inclined Walking, Lifting).
  • If the value meets the specified thresholds (1, 2, 3), it sets the corresponding image paths to green.png.
  • If the thresholds are not met, it sets the image paths to not.png.
  • To Preprocess color for the maximum value, this code checks the value of app.index and the corresponding value in app.pred_tempbuf.
  • If the rounded value equals 2 or 3, it sets the image paths to red.png and updates the display label accordingly.
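For GUI testing without the streaming script, the app can be launched standalone with dummy inputs (all values below are made up):
% Hypothetical standalone launch: pretend all 3 recent windows voted 'Walking'
pred_tempbuf = [0 3 0 0];
t = 0:0.25:4.75; % 20 time points
app = Realtime_Gauge(pred_tempbuf, 2, t, ones(size(t)), zeros(size(t)));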

Cite As

동윤 (2025). Hip Exoskeleton: Motion Recognition Based on Deep Learning (https://www.mathworks.com/matlabcentral/fileexchange/170471-hip-exoskeleton-motion-recognition-based-on-deep-learning), MATLAB Central File Exchange. Retrieved .

MATLAB Release Compatibility
Created with R2024a
Compatible with R2024a
Platform Compatibility
Windows macOS Linux
Version Published Release Notes
1.0.2

Added a video.

1.0.1

Fixed typos.

1.0.0