Колюбин Сергей Алексеевич
Publications:
Шамраев А., Колюбин С. А.
Bioinspired and Energy-Efficient Convex Model Predictive Control for a Quadruped Robot
2022, vol. 18, no. 5, pp. 831-841
Animal running has been studied for a long time, but robots still cannot reproduce these movements with an energy efficiency close to that of animals. Many controllers exist for quadruped locomotion, one of the most popular being convex MPC. This paper presents a bioinspired approach to increasing the energy efficiency of a state-of-the-art convex MPC controller. The idea is to supply the convex MPC with a reference trajectory generated by a SLIP model, which describes the motion of running animals. Adding the SLIP trajectory increases the energy efficiency of the pronking gait by 15 percent over the speed range from 0.75 m/s to 1.75 m/s.
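As a rough illustration of how a SLIP model can produce such a reference, the sketch below integrates the stance-phase dynamics of a point-mass body on a massless spring leg; the mass, stiffness, and touchdown state are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Minimal sketch of the stance phase of a SLIP (spring-loaded inverted
# pendulum) model: a point-mass body on a massless spring leg. The resulting
# CoM trajectory is the kind of reference that could be fed to a convex MPC
# tracker. All parameter values below are illustrative assumptions.
m, g = 12.0, 9.81        # body mass [kg], gravity [m/s^2]
k, l0 = 4000.0, 0.30     # leg spring stiffness [N/m], leg rest length [m]
dt = 1e-3                # integration step [s]

def slip_stance(x, z, vx, vz, foot_x=0.0, t_max=1.0):
    """Integrate stance dynamics until the leg extends back to its rest length."""
    traj = []
    for _ in range(int(t_max / dt)):
        lx, lz = x - foot_x, z           # leg vector from foot to CoM
        l = np.hypot(lx, lz)             # current leg length
        f = k * (l0 - l)                 # radial spring force (compression pushes outward)
        ax = f * lx / (l * m)
        az = f * lz / (l * m) - g
        vx, vz = vx + ax * dt, vz + az * dt
        x, z = x + vx * dt, z + vz * dt
        traj.append((x, z, vx, vz))
        if l >= l0:                      # spring unloaded: lift-off
            break
    return np.array(traj)

# Example: touch down slightly compressed while running forward at 1.25 m/s.
ref = slip_stance(x=-0.05, z=0.28, vx=1.25, vz=-0.3)
print(ref.shape)   # reference CoM states (x, z, vx, vz) over the stance phase
```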
|
Али В., Колюбин С. А.
EMG-Based Grasping Force Estimation for Robot Skill Transfer Learning
2022, vol. 18, no. 5, pp. 859-872
In this study, we discuss a new machine learning architecture, the multilayer perceptron-random forest regressors pipeline (MLP-RF model), which stacks two ML regressors of different kinds to estimate grip forces from surface electromyographic (sEMG) signals recorded during a gripping task. We evaluate the proposed approach on a publicly available dataset, putEMG-Force, which provides a paired sEMG-force data profile. The sEMG signals were filtered and preprocessed to obtain the features-target data frame used to train the proposed ML model. The proposed model is a pipeline stacking two ML models of different natures: a random forest regressor (RF regressor) and a multilayer perceptron artificial neural network (MLP regressor). The models were stacked together, and their outputs were combined by a Ridge regressor to obtain the best estimate from both models. The model was evaluated with two metrics, mean squared error and the coefficient of determination ($r^2$ score), to assess its prediction performance. We tuned the most significant hyperparameters of each MLP-RF component using a random search algorithm followed by a grid search algorithm. Finally, we benchmarked our MLP-RF model by training a recurrent neural network consisting of 2 LSTM layers, 2 dropout layers, and one dense layer on the same data (the common approach for problems with sequential datasets) and comparing its predictions with those of our proposed model. The results show that the MLP-RF model outperforms the RNN model.
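For concreteness, here is a minimal sketch of such a stacked pipeline using scikit-learn's StackingRegressor, with a Ridge regressor combining the RF and MLP predictions. Synthetic regression data stands in for the preprocessed putEMG-Force features, and the hyperparameter values are illustrative assumptions rather than the tuned values reported above.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

# Synthetic stand-in for the preprocessed sEMG feature matrix and force targets.
X, y = make_regression(n_samples=2000, n_features=24, noise=5.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# MLP-RF-style stack: RF and MLP base regressors, Ridge as the combining model.
mlp_rf = StackingRegressor(
    estimators=[
        ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
        ("mlp", MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0)),
    ],
    final_estimator=Ridge(alpha=1.0),   # blends the two base predictions
)
mlp_rf.fit(X_train, y_train)
y_pred = mlp_rf.predict(X_test)

print("MSE:", mean_squared_error(y_test, y_pred))
print("r^2:", r2_score(y_test, y_pred))
```

In practice, the base regressors' hyperparameters could be tuned with RandomizedSearchCV followed by GridSearchCV over the stacked estimator, mirroring the random-then-grid search procedure described in the abstract.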
|