You initialize a playing field of free space and obstacles.
You start at some particular location.
At any one point, you have a goal that you use to figure out which direction you would ideally like to go. You test whether you can go in that direction. If so, you move there and loop back to the previous step. If you cannot go in the most desired direction, you figure out the next best direction and test that; if that fails too, the next best after that, and so on. Unless this was the very first step and you happened to land in the one open spot surrounded by barriers, there is always somewhere you can go.
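The loop above can be sketched in a few lines of Python. The grid layout, the `GRID`/`is_free`/`greedy_step` names, and the use of squared straight-line distance as the "desirability" measure are my assumptions for illustration, not part of the original exercise:

```python
# A minimal sketch of the greedy step described above.
# '.' is free space, '#' is an obstacle. The grid and names are
# illustrative assumptions, not from the original problem statement.
GRID = [
    "........",
    "..####..",
    "..#..#..",
    "........",
]

def is_free(pos):
    """True if pos is on the grid and not an obstacle."""
    r, c = pos
    return 0 <= r < len(GRID) and 0 <= c < len(GRID[0]) and GRID[r][c] != "#"

def greedy_step(robot, goal):
    """Try candidate moves in order of most desired first."""
    r, c = robot
    moves = [(r + dr, c + dc)
             for dr in (-1, 0, 1) for dc in (-1, 0, 1)
             if (dr, dc) != (0, 0)]
    # Rank moves by squared straight-line distance to the goal.
    moves.sort(key=lambda p: (p[0] - goal[0]) ** 2 + (p[1] - goal[1]) ** 2)
    for move in moves:
        if is_free(move):
            return move
    return robot  # boxed in on the very first step: nowhere to go
```

Each call returns the best reachable neighbor, falling back to less desired directions only when the better ones are blocked.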
You will find, however, that in practice you need to be a bit cleverer than the above.
Let R be the robot, and G be the goal, with R sitting just above a pocket of obstacles that lies between it and G. The "ideal" direction would be for the robot to move downward. But of course if it does, it can go no further toward G, and must move back to the location it came from or to one of the open cells beside it. If it moves back to where it came from, it is again in the position where the "ideal" move is straight down toward G, and it gets stuck again. If it moves to a cell to the side, then on the next step the "ideal" move is diagonally down toward G, landing it in the same rut again.
How to deal with this is key to robotics: avoiding cycles through the same positions, whether caused by a lack of long-term planning or by limited "visibility" that keeps you from knowing what the "best" step is. You can research algorithms for this, or you can try to come up with one yourself.