Performance optimization has the following problems:
Loss of code readability. The most heavily optimized version of a solution does not take advantage of code reuse: functions are not dynamically allocated, constants are preallocated outside of the running function, and iteration is done with low-level looping constructs and mutable variables. All of this makes the code more verbose, more error-prone, more spread out, and the original problem harder to grasp.
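To make the tradeoff concrete, here is a hypothetical sketch of the same computation written twice: once for readability, and once hand-optimized with a hoisted constant, a mutable accumulator, and manual indexing. The function names and the computation itself are invented for illustration.

```python
def sum_of_even_squares(numbers):
    """Readable version: the intent is obvious at a glance."""
    return sum(n * n for n in numbers if n % 2 == 0)


_PARITY_MASK = 1  # constant preallocated outside the running function


def sum_of_even_squares_fast(numbers):
    """Hand-optimized version: low-level loop, mutable state,
    no reusable building blocks. The original problem is harder
    to see, and there is more surface area for bugs."""
    total = 0
    i = 0
    length = len(numbers)
    while i < length:
        n = numbers[i]
        if not n & _PARITY_MASK:  # bitwise parity test instead of n % 2
            total += n * n
        i += 1
    return total
```

Both return the same results, but only one of them reads like the problem statement.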
Loss of code reusability. The greatest gains in optimization come from the assumptions you are able to make about your input and about how your output will be used. This means that to maximize an optimization you have to fully understand the constraints of your problem. In other words, when the problem changes, your optimization will most likely need reworking or a complete overhaul. Meanwhile, you have solved one specific problem, and a very similar problem with even slightly different assumptions cannot borrow your efforts.
Loss of time. Optimization is hard and takes a lot of thinking and research into the problem and its constraints. And this is only the initial cost. Code that is more verbose and complex, taking into account many potentially obscure assumptions, is a tough read and even harder to update.
When I write code, the things I value are business value, testability, readability, and correctness, in that order. Performance as a concern is dead last. Before I dip into performance optimization of any code, it must meet the following criteria:
The perceived business value is worth spending the time and effort to optimize. If you are keen enough to weigh the value against the long-term maintenance cost of that optimized code as well as the initial cost: good job. Most optimizations never pass this test.
There is a well-defined performance problem. Just as business value is perceived, so is performance. Taking the time to define what performance means in a tangible way is essential to solving the problem. Should we refactor to perform fewer object comparisons, show a loading indicator, or stream results so the user perceives progress or feedback sooner? What tradeoffs can we make: should time complexity be optimized over space complexity on limited devices? How frequently does our underlying data change, since that dictates our caching mechanisms? Ideally the goals reduce to numbers you can look at and aim for, so that at the end you know the problem is solved.
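"Reduce the goal to a number" can be as simple as agreeing on a time budget and measuring against it. The sketch below is a hypothetical example: `handle_request` is a stand-in for the real code path, and the 50 ms budget is an invented target, not a recommendation.

```python
import timeit

def handle_request():
    """Stand-in for the real code path being measured."""
    return sum(i * i for i in range(10_000))

# The agreed, tangible goal: mean time per call under 50 ms.
BUDGET_SECONDS = 0.050

runs = 100
per_call = timeit.timeit(handle_request, number=runs) / runs

print(f"mean: {per_call * 1000:.2f} ms "
      f"(budget: {BUDGET_SECONDS * 1000:.0f} ms)")
print("goal met" if per_call <= BUDGET_SECONDS else "goal not met")
```

With a number like this written down, "is it fast enough yet?" stops being a matter of opinion.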
The existing solution must be well defined and correct. This means that all of the assumptions we can make about the inputs and outputs are articulated and preferably codified as test cases. The test harness should come before the reimplementation, because code reuse, readability, and time are better respected by a non-optimized solution. This usage contract is a snapshot of the functionality the optimized version must uphold. This matters because when optimizing for performance we never want to sacrifice correctness, which ranks far higher. With incredible luck, these tests may already exist. If they do not, factor writing them into the cost of the optimization.
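A minimal sketch of what codifying that usage contract might look like, assuming a hypothetical `slugify` function whose articulated assumptions are: output is lowercase, words are hyphen-separated, and punctuation is stripped. The function and its assumptions are invented stand-ins for your own code.

```python
def slugify(title):
    """The current, unoptimized (but correct and readable) solution."""
    words = "".join(c if c.isalnum() else " " for c in title).split()
    return "-".join(w.lower() for w in words)


# The usage contract: a snapshot of behavior that any optimized
# reimplementation must uphold, written BEFORE the rewrite begins.
def test_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"

def test_strips_punctuation():
    assert slugify("C'est la vie!") == "c-est-la-vie"

def test_empty_input():
    assert slugify("") == ""


if __name__ == "__main__":
    test_lowercases_and_hyphenates()
    test_strips_punctuation()
    test_empty_input()
    print("contract holds")
```

Run the same suite against the optimized version; if it passes, the rewrite upheld the contract.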
With the above criteria met, I will happily optimize. The great things about taking this stance are:
The criteria are almost never met. Looking the cost of an optimization in the face, most businesses can do without having something work 100 milliseconds faster.
Fulfilled criteria are a roadmap to a very good solution. The research has been done: the constraints, tradeoffs, and goals are known. The roadmap, if written down, has the added maintenance benefit of being an artifact for future maintainers to look at and understand why choices were made the way they were, cushioning the blow from the problems optimization causes.
The criteria take time to fulfill. Every day there is a new problem to solve. Fulfilling these criteria is in itself evidence that due diligence went into the decision to optimize. The decision was not made on a whim, which means resources are more intelligently allocated to serious problems.
In summation, optimization has costs in time, reusability, and readability. Before embarking on an optimization, I always want to know that it has business value, what my constraints are in solving the problem, and how I will know the problem is solved. When I know all of that, the problem is easier to solve, it must be worth solving, and the gathered information has reduced the maintenance costs of the optimized solution. Or better yet, we find the non-optimized solution is already good enough.
I hope my reasoning resonates with you. Thank you for reading.