Once you have clustered your data with the k-means algorithm, you can certainly use the resulting cluster centers as initial conditions for Gaussian mixture clustering. The catch is that the initial conditions passed to the gmdistribution.fit function must be in the proper form (a structure); see the gmdistribution.fit documentation for details.
The other catch is that the Gaussian mixture clustering routine requires three sets of initial conditions: the initial cluster means (which you are providing from k-means), the initial cluster covariances, and the initial cluster weights. The covariances and weights can simply be initialized to reasonable guesses.
To help get you started, here is some example code:
% Generate example data drawn from two Gaussian components
dataLength = 5000;
muData = [5 30];
stdData = [4 10];
dataVec = [muData(1) + stdData(1)*randn(dataLength/2,1); ...
           muData(2) + stdData(2)*randn(dataLength/2,1)];

% Run k-means; the SECOND output of kmeans holds the cluster centers
numberOfClusters = 2;
[~,kMeansCenters] = kmeans(dataVec,numberOfClusters);

% Initial covariances: one d-by-d matrix per cluster, stacked along
% dimension 3 (here d = 1, so each covariance is a scalar)
gmInitialVariance = 0.1;
initialSigma = cat(3,gmInitialVariance,gmInitialVariance);

% Initial mixing weights (must sum to 1)
initialWeights = [0.5 0.5];

% Pack the initial conditions into the structure gmdistribution.fit expects
S.mu = kMeansCenters;
S.Sigma = initialSigma;
S.PComponents = initialWeights;

% Fit the Gaussian mixture model, starting from S
% (in newer MATLAB releases, fitgmdist(dataVec,numberOfClusters,'Start',S)
% is the equivalent call)
gmmOfData = gmdistribution.fit(dataVec,numberOfClusters,'Start',S);
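Once the model is fit, you can use the gmdistribution object directly; for example, the cluster and posterior methods give hard assignments and component probabilities. A quick sketch, assuming the gmmOfData and dataVec variables from the code above:

% Assign each point in dataVec to its most likely Gaussian component
clusterIdx = cluster(gmmOfData, dataVec);

% Posterior probability of each component for each point
% (one row per point, one column per component)
posteriors = posterior(gmmOfData, dataVec);

Comparing clusterIdx against your original k-means labels is a handy sanity check that the EM fit didn't wander far from the k-means solution.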
Hope this helps and good luck!