Here is an applied-math / matrix-algebra problem (row and column reordering / dimension reduction) for your kind consideration. Any specific insights will be immensely appreciated.
I conducted 12 independent operational-risk audits of 12 independent entities, each using a standard questionnaire. The questionnaire has 5 categories, each with a varying number of sub-categories. Specifically: Category 1 has sub-categories 1A, 1B, 1C, 1D, 1E; Category 2 has sub-category 2A; Category 3 has sub-categories 3A, 3B, 3C, 3D, 3E, 3F, 3G; Category 4 has sub-category 4A; Category 5 has sub-categories 5A, 5B, 5C, 5D.
The result of each question is Green (low risk, 0-3/10), Yellow (moderate risk, 4-7/10), or Red (high risk, 8-10/10). Please refer to my attached audit report.
Now the problem: as far as possible, I would like to cluster all the Reds in one corner of the grid and all the Greens in the opposite corner, with the Yellows clustered around the middle.
The allowed moves are:
- The entities (stacked along the y-axis) can be shuffled freely; there is no sacrosanct order.
- The categories (stacked along the x-axis) can be shuffled freely; there is no sacrosanct order.
- The sub-categories can be shuffled within their own category, as long as they continue to belong to that category.
Kindly refer to the attachment and point me in the right direction on how this optimization may be achieved.
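To make the constraints concrete, here is a toy sketch of the kind of greedy reordering I have in mind (this is an illustration, not my actual audit data: the entities, categories, and colours below are made up, and I am assuming a simple Green/Yellow/Red score of 0/1/2). Rows are sorted by total risk, categories by total risk, and sub-columns only within their own category:

```python
# Greedy seriation sketch: score each cell (Green=0, Yellow=1, Red=2),
# then sort rows, categories, and sub-columns (within their category)
# by descending total score, pushing Reds toward the top-left corner.
# All data here is a made-up toy example, not the real audit report.

SCORE = {"G": 0, "Y": 1, "R": 2}

# Sub-columns grouped by category; they may only move within their group.
categories = {
    "Cat1": ["1A", "1B", "1C"],
    "Cat2": ["2A", "2B"],
}

grid = {  # entity -> {sub-column -> colour}
    "E1": {"1A": "G", "1B": "R", "1C": "G", "2A": "G", "2B": "G"},
    "E2": {"1A": "R", "1B": "R", "1C": "Y", "2A": "R", "2B": "Y"},
    "E3": {"1A": "G", "1B": "G", "1C": "G", "2A": "G", "2B": "G"},
    "E4": {"1A": "R", "1B": "Y", "1C": "R", "2A": "Y", "2B": "Y"},
}

def row_score(entity):
    return sum(SCORE[c] for c in grid[entity].values())

def col_score(col):
    return sum(SCORE[grid[e][col]] for e in grid)

# 1. Order entities (rows) by descending total risk: Reds rise to the top.
entity_order = sorted(grid, key=row_score, reverse=True)

# 2. Order categories by their total risk, descending.
cat_order = sorted(categories,
                   key=lambda c: sum(col_score(s) for s in categories[c]),
                   reverse=True)

# 3. Within each category, order sub-columns by descending risk.
col_order = [s for c in cat_order
             for s in sorted(categories[c], key=col_score, reverse=True)]

print(entity_order)  # riskiest entity first
print(col_order)     # riskiest category/sub-column first
```

This is only a first-pass heuristic (sorting by marginal sums); I suspect the "proper" framing is matrix seriation with a block constraint on the columns, but even confirmation of that would help.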
Thanks very much.