As posts on this thread have indicated, while tables are often the right data structure for the job, scalar indexing into a table is much slower than indexing into types such as double arrays and structs. There have been significant performance improvements since tables were introduced in R2013b (writetable, for example), and those improvements will continue, but tables are at their best when operations can be vectorized. That's often true even of plain old double matrices. It's also best to preallocate a table rather than growing it row by row, and again, that's true even for double matrices.
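For example, here's a minimal sketch of that preallocation advice (the size and variable names are made up):

% Slow: grow the table one row at a time
t = table();
for i = 1:10000
    t = [t; table(rand,rand,'VariableNames',{'X','Y'})]; % reallocates the whole table each iteration
end

% Better: build the full variables first, then construct the table once
x = rand(10000,1);
y = rand(10000,1);
t = table(x,y,'VariableNames',{'X','Y'});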
In situations where code cannot be vectorized, perhaps because the results of one iteration of a loop affect subsequent iterations, you often don't need to rewrite the code to avoid tables entirely. Instead, encapsulate the body of the loop in a function, call it by passing in the table's variables using dot subscripting, and assign the results back to the table's variables. It often looks something like this:
[t.X,t.Y,t.Z] = fun(t.A,t.B,t.C);
where fun contains a loop that works on plain arrays. Even when it's not desirable to encapsulate the code in a function, it's often possible to "hoist" a small number of variables out of a table and into the workspace before a loop, have the loop work on them, and then put the results back into the table. In other words, if performance is an issue, consider replacing just the bottlenecks with code that uses lower-level data types, rather than avoiding tables completely.
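To make both patterns concrete, here's a hypothetical sketch (the recurrence is made up purely for illustration). A fun matching the call above might look like:

function [x,y,z] = fun(a,b,c)
% The loop works on plain double arrays, so the scalar indexing inside it is fast.
x = zeros(size(a));
x(1) = a(1);
for i = 2:numel(a)
    x(i) = 0.5*x(i-1) + a(i); % each iteration depends on the previous one
end
y = x + b;  % these steps don't depend on earlier iterations, so keep them vectorized
z = y .* c;
end

And the hoisting version of the same idea, without a separate function:

a = t.A;
x = zeros(size(a));
x(1) = a(1);
for i = 2:numel(a)
    x(i) = 0.5*x(i-1) + a(i);
end
t.X = x; % put the result back into the table

Either way, the loop's scalar indexing happens on an ordinary double array, and the table is only touched to pull the variables out and to store the results.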