Don't set software optimisation as a goal too early in software development. Instead, implement simple, slow techniques that are easy to check (and understand), and only develop complicated (fast) approaches once you have an answer against which to compare them. A good example is writing derivative functions for minimisation routines: their mathematical complexity makes them very difficult to write correctly the first time, so they should be checked against methods based on finite differences.
Since the advent of floating-point co-processors, integer arithmetic is now actually slower than floating point on most machines!
Condensed C source code does not run any faster once it is compiled; it is just harder to read.
Good algorithms should be limited by statistical rather than numerical stability: the numerical error should stay well below the statistical noise in the data. It is therefore generally a good idea to perform all intermediate calculations in double precision, while it is usually sufficient to represent input and output data in single precision.
Watch out for bad random number generators: they will produce peculiar systematic effects in Monte-Carlo studies.