
What is the syntax to share support vectors among multiple regression SVMs?

I am fitting 64 support vector machines to accomplish a regression task on 64-dimensional targets. At the moment, I am simply iterating through each regression and saving the individually-fit models in a cell array. I find that each model has ~2000 support vectors and would like to reduce the storage the models require by identifying which support vectors are shared among regressors and having the models point to the shared support vectors rather than duplicate them. Is there a wrapper function that can identify and consolidate shared support vectors as the models train, or must I implement this manually? I am afraid I cannot simply install Python to use sklearn.


Answers (1)

Shubham on 27 Aug 2024
Hi Joseph,
To share support vectors across multiple regression Support Vector Machines (SVMs) without duplicating them, you will likely need a custom solution: I am not aware of a built-in wrapper that consolidates support vectors across models as they train, and since you cannot install Python, falling back on scikit-learn is not an option.
Here's a high-level approach to manually consolidate shared support vectors:
  1. After training each SVM, extract the support vectors and their indices. You can then compare these support vectors across different models to identify duplicates.
  2. Use a data structure (e.g., a dictionary or map) to store unique support vectors. Each unique support vector is stored once, and each model stores a reference (e.g., an index or pointer) to the shared support vector.
  3. Instead of storing the full support vectors in each model, store only the indices or references to the shared support vector storage.
  4. Adjust the model's prediction function to retrieve support vectors from the shared storage using the references.
A minimal MATLAB sketch of steps 1–3 (the prediction step is addressed in the considerations below):

% X:                     n-by-64 predictor matrix, Y: n-by-64 target matrix
% models:                cell array holding the trained regression SVMs
% sharedSupportVectors:  map from a string key to the row index of a unique support vector
% uniqueSVs:             matrix whose rows are the unique (shared) support vectors
% supportVectorIndices:  per-model indices into the rows of uniqueSVs
numModels = size(Y, 2);
models = cell(1, numModels);
supportVectorIndices = cell(1, numModels);
sharedSupportVectors = containers.Map('KeyType', 'char', 'ValueType', 'double');
uniqueSVs = [];
for i = 1:numModels
    model = fitrsvm(X, Y(:, i)); % train one regression SVM per target dimension
    supportVectors = model.SupportVectors;
    indices = zeros(size(supportVectors, 1), 1);
    for j = 1:size(supportVectors, 1)
        sv = supportVectors(j, :);
        svKey = mat2str(sv, 17); % string key; 17 significant digits round-trips doubles exactly
        if isKey(sharedSupportVectors, svKey)
            index = sharedSupportVectors(svKey);
        else
            index = sharedSupportVectors.Count + 1;
            sharedSupportVectors(svKey) = index;
            uniqueSVs(index, :) = sv; % keep the numeric values for later prediction
        end
        indices(j) = index;
    end
    supportVectorIndices{i} = indices; % store references instead of duplicate vectors
    models{i} = model;
end
% Save uniqueSVs and supportVectorIndices alongside the models; during prediction,
% recover model i's support vectors as uniqueSVs(supportVectorIndices{i}, :).
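If you would rather avoid string keys entirely, the same consolidation can be done in one pass after all models are trained. This is a minimal alternative sketch, assuming the cell array models from above; it relies on exact floating-point equality of rows, which should hold when every model is trained on the same predictor matrix with the same settings:

% Stack every model's support vectors, then keep one copy of each unique row.
allSVs = cell2mat(cellfun(@(m) m.SupportVectors, models(:), 'UniformOutput', false));
[uniqueSVs, ~, ic] = unique(allSVs, 'rows', 'stable'); % ic maps each stacked row to a unique row
% Split the combined index vector back into one index list per model.
counts = cellfun(@(m) size(m.SupportVectors, 1), models(:));
supportVectorIndices = mat2cell(ic, counts, 1);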
Considerations:
  • The map is used only for fast duplicate lookups; the numeric support vector values are stored once in uniqueSVs, and each model keeps only row indices into it.
  • Ensure that the string keys preserve full numerical precision (e.g., mat2str with 17 significant digits) so that identical support vectors are not mistakenly treated as distinct.
  • Modify the prediction logic so that each model retrieves its support vectors from the shared storage via the stored indices; a sketch follows below.
This approach allows you to reduce storage requirements by avoiding duplication of support vectors across models. Implementing this manually gives you full control over the process and can be tailored to your specific environment and constraints.
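For the prediction step, the sketch below shows one hypothetical helper, predictFromShared, that rebuilds a model's support vectors from the shared pool and evaluates the SVM regression function manually. It assumes each model was trained with fitrsvm using a Gaussian kernel and without standardization, so that the prediction is f(x) = sum_j alpha_j * exp(-||x - s_j||^2 / scale^2) + bias; adapt the kernel evaluation if your kernel or standardization settings differ.

function yhat = predictFromShared(model, svIdx, uniqueSVs, Xnew)
% Hypothetical helper: predict with one model using the shared support vector pool.
% Assumes fitrsvm(..., 'KernelFunction', 'gaussian') without 'Standardize'.
    svs   = uniqueSVs(svIdx, :);              % recover this model's support vectors
    alpha = model.Alpha;                      % dual coefficients, one per support vector
    scale = model.KernelParameters.Scale;     % Gaussian kernel scale
    yhat  = zeros(size(Xnew, 1), 1);
    for k = 1:size(Xnew, 1)
        d2      = sum((svs - Xnew(k, :)).^2, 2);      % squared distances to the support vectors
        yhat(k) = exp(-d2 / scale^2).' * alpha + model.Bias;
    end
end

For model i, you would call it as yhat = predictFromShared(models{i}, supportVectorIndices{i}, uniqueSVs, Xnew).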

Release: R2023b