Better title needed. The word "efficient" is a common one used in many different contexts. --mav
What about 'efficient' and 'effective'? The German Wikipedia makes a distinction between the two. If there is such a distinction in English (I do not know, I am not a native speaker) then you should probably mention it.
moved this from the article:
- The term "efficient" is often confused with, and misused for, the term "effective". Efficiency is a measurable, quantitative concept, given by the equation: Efficiency = Output/Input (the same as the concept of productivity); or alternatively Efficiency = Output/Predetermined expectation. Effectiveness, by contrast, is a vague, largely non-quantitative concept, mainly concerned with achieving objectives. --220.127.116.11 08:01, 17 November 2005 (UTC)
A common assumption is that there is always a tradeoff between time and space in an algorithm, but that is only true if it is actually on the optimal curve of time vs space. It is quite possible (and common) for any given algorithm to be quite non-optimal, in which case both its time and space consumption can be reduced, bringing it closer to the optimal curve. MikeDunlavey 19:20, 29 May 2007 (UTC)
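The point above can be sketched with a standard toy example (my own illustration, not from the article): a naive recursive Fibonacci is off the optimal time/space curve, so switching to the iterative version improves both time and space at once, with no trade-off involved.

```python
# Naive recursive Fibonacci: exponential time, O(n) call-stack space.
def fib_naive(n):
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

# Iterative Fibonacci: O(n) time, O(1) extra space.
# Better on BOTH axes, because the naive version was non-optimal to begin with.
def fib_iter(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

Only once an algorithm is actually on the optimal curve does reducing one resource force an increase in the other.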
Looking through the history, this used to be a good page. But it's now bloated with dozens of semi-related topics (e.g., the compiler and hardware optimizations). Those topics all have their own pages; they don't belong here. 18.104.22.168 (talk) 20:20, 14 May 2010 (UTC)
- Agree completely, anything related to optimisation should be elsewhere. Murray Langton (talk) 05:32, 15 May 2010 (UTC)
- When discussing the efficiency of algorithms in general, it would be extremely dumb to only consider what can be done to improve a pre-existing algorithm. Are you suggesting that programs should be produced first, without any regard for how well they are likely to perform, and that - once they are "working" - programmers should go back and see how they can be improved by optimization? If you are, then consider this:-
- Imagine a 10 man-year project with 10 programmers working on it, blindly coding away for 9 months, only to later discover - during testing - that it all ran much too slowly and had to be completely re-written to improve its efficiency. The oft-repeated phrase "premature optimization" has a lot to answer for, and has been used in isolation and completely out of context to justify not caring how efficient an algorithm is from the start. As for "semi-related topics", when it comes down to efficiency, all things have to be considered simultaneously. A racing car designer has to consider tyres, suspension, engine capacity, fuel, airflow, safety, gears, brakes as a complete package (not forgetting the driver and his interfaces). Similarly, a software engineer should consider all "semi-related" but contingent topics during the design phase or pay the price later. The good software engineer should, just like his mechanical counterpart, design and build optimally. There are rare exceptions, such as one-off programs - but even these have an inconvenient habit of being re-used again and again. —Preceding unsigned comment added by Kdakin (talk • contribs) 06:01, 8 June 2010 (UTC)
Error (I think)
- For example, a condition might test patients for (age > 18) before testing (blood type == 'AB-') because this type of blood occurs in only about 1 in 100 of the population. This would eliminate the second test at runtime in 99% of instances; something an optimizing compiler would almost certainly not be aware of - but which a programmer can research relatively easily even without specialist medical knowledge.
I think this is the wrong way around isn't it? The blood type check should be done first to avoid the age test (the second part of the statement seems to suggest it is meant this way around). But maybe I am just confused. --22.214.171.124 (talk) 16:23, 21 June 2012 (UTC)
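For what it's worth, with short-circuit 'and' the commenter's reading seems right (assuming the two tests cost about the same): putting the rarely-true condition first skips the second test in most cases. A minimal sketch, with a hypothetical patient record:

```python
def eligible_rare_first(patient):
    # blood_type == 'AB-' holds for only ~1% of patients, so short-circuit
    # evaluation skips the age check in the other ~99% of cases.
    return patient["blood_type"] == "AB-" and patient["age"] > 18

def eligible_common_first(patient):
    # age > 18 holds for most patients, so the blood-type check
    # still runs in most cases -- same result, more work on average.
    return patient["age"] > 18 and patient["blood_type"] == "AB-"
```

Both orderings return the same answer for every patient; only the expected number of tests evaluated differs.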
This section claims that advances in FFT algorithms 'may' increase processing speeds by a factor of 10,000 or so. Not only is this incorrect, but it's also mentioned nowhere in the articles it references (reference 13 as of this reading). While it seems to be a good article to link to on algorithmic efficiency, the comment written on Wikipedia about the article seems blatantly wrong. Or is, at the least, too unsupported for me to verify when checking both the attached paper from MIT and the news article on MIT's website. — Preceding unsigned comment added by 126.96.36.199 (talk) 18:48, 18 January 2013 (UTC)
- Essentially this implies that there is no automated method that can produce an optimum result and is therefore characterized by a requirement for human ingenuity or Innovation.
This is nonsense.
A computer is perfectly capable of systematically searching for the shortest representation of some input in some specific language (see also superoptimization), and informing the user of the best result found so far (i.e. an upper bound on the Kolmogorov complexity). Since the search space is finite, given sufficient time and memory it will eventually find and output an optimal solution. However, once it has done so, it may nonetheless continue to run forever evaluating some remaining shorter candidate representations which, unknown to the algorithm, will in fact never halt. This is why the Kolmogorov complexity function is formally not computable: the algorithm cannot, in general, determine whether its best solution so far is optimal, which it would need to do in order to claim it has calculated the Kolmogorov complexity.
There is no requirement for human ingenuity or innovation here, just a requirement for human impatience to declare the latest solution to be good enough, and abort the (potentially futile) search for a better one. Note that in reality, there is probably also a limit on how long the evaluation of a representation is permitted to take anyway, and if such a limit is imposed then finding the optimum is computable.
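To make the search idea above concrete, here is a toy sketch (my own illustration; the instruction set and target are invented for the example). It enumerates programs over a tiny, always-halting instruction set in order of increasing length, so the first hit is guaranteed shortest - the halting problem only bites when candidate programs may fail to terminate:

```python
from itertools import product

# A toy always-halting "language": each program is a sequence of these ops,
# applied in order to the starting value 0.
OPS = {"inc": lambda x: x + 1, "double": lambda x: x * 2}

def shortest_program(target, max_len=10):
    # Lengths are searched in increasing order, so the first program found
    # is a shortest one producing the target.
    for length in range(max_len + 1):
        for prog in product(OPS, repeat=length):
            value = 0
            for op in prog:
                value = OPS[op](value)
            if value == target:
                return prog
    return None  # no program of length <= max_len produces the target
```

With a bound like max_len (or a step limit on evaluation), the optimum is computable, exactly as the comment says; drop the bound and allow non-halting candidates, and the general problem becomes uncomputable.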
Over the next few months or so I intend to do a major rewrite of this article.
The most major change will be to remove all the optimization techniques which properly belong elsewhere; if they are not in the other relevant articles then I will move them there.
Once I've got rid of these, I'll then move on to the actual rewrite, hopefully trying to add references as I go.
If you have any comments on the above plan, please make them soon.
Gal-Ezer and Zur
Why are they quoted in the opening paragraph? Who has heard of them? And what is the point of the quote itself? Stating the obvious, that efficiency is important? — Preceding unsigned comment added by Uziel302 (talk • contribs) 02:02, 20 August 2013 (UTC)
Decimation of original article
Many of the highly relevant points in the original article have been completely removed. The latest offering, for example, assumes that it is improper to consider optimal methods before and during program creation, and instead relegates these to "optimization" (i.e. a later "fix") articles. The latest offering also seems to suggest that human inventiveness can be superseded by computation in finding a faster algorithm. Clearly he is an AI believer (and wrong)! — Preceding unsigned comment added by 188.8.131.52 (talk) 09:36, 1 June 2014 (UTC)
- The title of this article is 'Algorithmic efficiency', which implies it is about the efficiency of algorithms. There are lots of separate articles related to 'program optimisation', so matters relating purely to optimisation have been removed from the article. According to Knuth: "Premature optimization is the root of all evil (or at least most of it) in programming." Murray Langton (talk) 08:25, 2 June 2014 (UTC)
Should there not be a section on energy complexity? See for instance Roy et al (2012). A Google search will also reveal a lot of hits on the term "algorithmic complexity energy". I would add this material myself but it is a bit outside my area of expertise. Regards. RobbieIanMorrison (talk) 12:13, 13 July 2016 (UTC)