Reading through an article this morning (We’re not prepared for the end of Moore’s Law @MIT Technology Review) reminded me of an argument that constantly came up during my time in the games industry. We would spend a lot of engineering time getting the most out of the fixed hardware we had on a console. The thinking was that, in time, the power of the platform would mean we could spend less time maximizing performance and more time maximizing engineering productivity. We have definitely seen that evolution since the first days of console software. The choice many teams have made to use Unity or Unreal is a great example of this kind of decision: a purpose-built engine could deliver better performance, and thus more functionality, but the increase in sales rarely justifies the engineering effort required. Anyways, quoting from the article (regarding the death of Moore’s law):
One opportunity is in slimming down so-called software bloat to wring the most out of existing chips. When chips could always be counted on to get faster and more powerful, programmers didn’t need to worry much about writing more efficient code. And they often failed to take full advantage of changes in hardware architecture, such as the multiple cores, or processors, seen in chips used today.
Thompson and his colleagues showed that they could get a computationally intensive calculation to run some 47 times faster just by switching from Python, a popular general-purpose programming language, to the more efficient C. That’s because C, while it requires more work from the programmer, greatly reduces the required number of operations, making a program run much faster. Further tailoring the code to take full advantage of a chip with 18 processing cores sped things up even more. In just 0.41 seconds, the researchers got a result that took seven hours with Python code.
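The article doesn’t include the researchers’ code, but the shape of the win is easy to sketch. Here is a minimal, hypothetical version in C, assuming the “computationally intensive calculation” is a dense matrix multiply (the dimension N, the loop ordering, and the OpenMP pragma are my own illustrative choices, not the researchers’ actual code). The single `#pragma` line is what spreads the work across all the cores:

```c
/* Hypothetical sketch: naive dense matrix multiply in C, parallelized
   across cores with OpenMP. Not the researchers' code -- just an
   illustration of the kind of speedup the article describes.
   Compile with: gcc -O3 -fopenmp matmul.c -o matmul */
#include <stdlib.h>
#include <stdio.h>
#include <omp.h>

#define N 1024  /* matrix dimension; chosen arbitrarily for this sketch */

int main(void) {
    double *a = malloc(sizeof(double) * N * N);
    double *b = malloc(sizeof(double) * N * N);
    double *c = calloc((size_t)N * N, sizeof(double));

    /* fill the inputs with something deterministic */
    for (long i = 0; i < (long)N * N; i++) {
        a[i] = (double)(i % 7);
        b[i] = (double)(i % 13);
    }

    double t0 = omp_get_wtime();

    /* each row of the result is independent, so the outer loop
       parallelizes cleanly across however many cores are available */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        for (int k = 0; k < N; k++) {
            /* i-k-j loop order keeps the inner loop cache-friendly */
            double aik = a[(long)i * N + k];
            for (int j = 0; j < N; j++)
                c[(long)i * N + j] += aik * b[(long)k * N + j];
        }

    printf("done in %.2fs (c[0]=%f)\n", omp_get_wtime() - t0, c[0]);
    free(a); free(b); free(c);
    return 0;
}
```

The equivalent triple loop in pure Python pays interpreter overhead on every single multiply and add, which is where most of that 47x gap comes from; the pragma then buys roughly another factor of the core count on top of that.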
Makes me wonder if we will see a return (full circle) to this debate of engineering effort versus value per watt of CPU time.