It really depends on what exactly you did...
To compute the extrinsics you need an image of a checkerboard with known square size. So most of the error in the pose comes from the error in localizing the checkerboard corner points.
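For reference, that pipeline looks roughly like this (a sketch only; it assumes you already calibrated the camera with estimateCameraParameters, and the file name and square size are placeholders):

```matlab
% Sketch: camera extrinsics from a single checkerboard image.
% Assumes cameraParams was computed beforehand with estimateCameraParameters.
I = imread('checkerboard.jpg');   % hypothetical file name

% Localize the checkerboard corners; any detection error here
% propagates directly into the pose estimate.
[imagePoints, boardSize] = detectCheckerboardPoints(I);

% World coordinates of the corners, given the known square size.
squareSize = 25;   % assumed value, in millimeters
worldPoints = generateCheckerboardPoints(boardSize, squareSize);

% Solve for the camera's rotation and translation relative to the board.
[R, t] = extrinsics(imagePoints, worldPoints, cameraParams);
```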
To use cameraPose you need the fundamental matrix. If you get it from the estimateFundamentalMatrix function, then you first have to find matching points between the two images. Now the error in the pose depends on the error in localizing both sets of points, and on how many of the matches are actually correct.
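That pipeline has more stages, each of which adds error (again just a sketch; the choice of SURF features and the file names are assumptions, not necessarily what you did):

```matlab
% Sketch: relative camera pose from two views via the fundamental matrix.
I1 = rgb2gray(imread('view1.jpg'));   % hypothetical file names
I2 = rgb2gray(imread('view2.jpg'));

% Detect and match features; both localization noise and wrong
% matches feed into the fundamental matrix estimate.
points1 = detectSURFFeatures(I1);
points2 = detectSURFFeatures(I2);
[f1, vpts1] = extractFeatures(I1, points1);
[f2, vpts2] = extractFeatures(I2, points2);
pairs = matchFeatures(f1, f2);
matched1 = vpts1(pairs(:, 1));
matched2 = vpts2(pairs(:, 2));

% RANSAC rejects outlier matches, but noise in the inliers remains.
[F, inliers] = estimateFundamentalMatrix(matched1, matched2, ...
    'Method', 'RANSAC');

% Recover the relative orientation and location (location is up to scale).
[orientation, location] = cameraPose(F, cameraParams, ...
    matched1(inliers), matched2(inliers));
```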
So if this is what you did, then it makes sense for the extrinsics function to be more accurate. On the other hand, if you computed the fundamental matrix analytically, this reasoning does not apply. More information about how exactly you set up your experiment would help.