From HackerspaceWiki

Optimizing: the process of balancing all the prevailing forces to achieve awesomeness. Don't do it until last, as it is always platform-specific. Sinking too much time into making the highest-performing code wastes a lot of effort when you find out you optimized the wrong problem. You could say the counter-force for this step is programmer time itself: every hour spent honing your code is an hour taken away from better problems.

But while conquering your code, always keep in mind the memory consumption and resource utilization of your structures. This is where the real programmers are separated from the wannabes. You might want to follow the OneTruePath.

Know that I/O is almost always among the most expensive things a computer does, so choose carefully what you move into user-space. You can check these costs with a profiler.

Some crude rules of thumb:

  • Disk I/O is 10,000x slower or more than processing,
  • RAM I/O is about 10x slower,
  • Text I/O, about 10-100x slower,
  • Graphics I/O, 1000x slower,
  • Network I/O, 10,000x slower. (UDP? ~1000x)
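You can get a feel for these ratios yourself. The sketch below is one rough way to do it in Python, comparing pure in-memory processing against a synced disk write with the standard `timeit` module; the function names and sizes are illustrative, and the measured ratio will vary wildly by platform:

```python
import os
import tempfile
import timeit

N = 10_000

def memory_work():
    # Pure in-memory processing: sum a range of integers.
    return sum(range(N))

def disk_work(path):
    # Disk I/O: write a small file and force it out of the OS cache.
    with open(path, "w") as f:
        f.write("x" * N)
        f.flush()
        os.fsync(f.fileno())

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "probe.txt")
    mem_t = timeit.timeit(memory_work, number=100)
    disk_t = timeit.timeit(lambda: disk_work(path), number=100)
    print(f"memory: {mem_t:.4f}s  disk: {disk_t:.4f}s  "
          f"ratio: {disk_t / mem_t:.0f}x")
```

Without the `os.fsync` call the write often lands only in the OS page cache, which is why naive benchmarks can make disk I/O look deceptively cheap.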

Once you've accomplished a correct program, then you can start to ensure that it doesn't use resources terribly. A good profiler can show the CPU time used by each of your routines, giving you ideas of where to optimize. Then you can go back and conquer your code.
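As a minimal sketch of what that per-routine accounting looks like, here is Python's built-in cProfile wrapped around two made-up functions (the names and workloads are purely illustrative); the report lists time per call, which is exactly the "where should I optimize?" signal described above:

```python
import cProfile
import io
import pstats

def slow_square_sum(n):
    # Deliberately naive: building a throwaway list on every
    # iteration dominates the runtime.
    total = 0
    for i in range(n):
        total += sum([i] * 100)
    return total

def fast_path(n):
    # Cheap closed-form arithmetic for comparison.
    return n * (n - 1) // 2

def main():
    slow_square_sum(2_000)
    fast_path(2_000)

profiler = cProfile.Profile()
profiler.enable()
main()
profiler.disable()

# Print cumulative time per function, worst offenders first.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```

In the printed table, `slow_square_sum` should dwarf `fast_path`, telling you which routine is worth conquering first.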

But it's also here where you might find that you've been working on the wrong problem, and that rather than conquering, you should be synthesizing. That's why you save advanced optimizing until last, AND why very-high-level programming languages like Python have arisen. You can tackle the structure of the problem first, and then optimize, porting to a lower-level language where needed.
