by Forrest Sheng Bao http://fsbao.net
In AI, planning is a pain in the ass, because automated planners are not yet practical. We want the planner to find a rational solution to our problem, but finding that rational plan takes a lot of time. For example, an automated planner may take far longer than a human air traffic controller to schedule the landings of a few incoming jets at even a small airport. Aviation fuel is very expensive, and if a jet runs out of fuel it will crash.
What if we allow the planner to find an incorrect plan sometimes? I mean, humanly. Human beings make mistakes. Rationality is hard for us. Otherwise, we wouldn't have created the word ``stupid,'' at least in Chinese, English, and German. Yet we have been living with our mistakes for thousands of years, according to documented history.
If we could find a proper balance between computing time and correctness of the solution, then thinking humanly could be a better choice for automated planning.
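To make this tradeoff concrete, here is a minimal sketch in Python, with entirely made-up numbers: a toy landing-sequencing problem where an exhaustive planner checks every ordering of the jets (provably best, but factorial time), while a "humanly" planner just greedily lands the most fuel-critical jet first. The jets, fuel values, and the fixed runway slot time are all hypothetical; the greedy heuristic can be suboptimal in general, but it runs in an instant.

```python
import itertools

# Toy landing-sequencing problem (all numbers hypothetical):
# each jet has a fuel deadline in minutes; every landing occupies the
# runway for a fixed slot, and a plan is "rational" if no jet lands
# after its fuel runs out.

SLOT = 4  # minutes of runway time per landing (assumed)

def violations(order, fuel):
    """Count jets that would land after their fuel deadline."""
    return sum(1 for i, jet in enumerate(order) if (i + 1) * SLOT > fuel[jet])

def exhaustive_plan(fuel):
    """Rational planner: tries every ordering -- factorial time."""
    return min(itertools.permutations(range(len(fuel))),
               key=lambda order: violations(order, fuel))

def greedy_plan(fuel):
    """Human-style heuristic: most fuel-critical jet lands first."""
    return tuple(sorted(range(len(fuel)), key=lambda jet: fuel[jet]))

fuel = [30, 9, 17, 5, 22, 12, 26, 40]  # minutes of fuel per jet (made up)
best = exhaustive_plan(fuel)   # slow but certain
quick = greedy_plan(fuel)      # instant; here it is just as safe
print(violations(best, fuel), violations(quick, fuel))
```

On this instance the heuristic happens to match the exhaustive optimum while examining one ordering instead of 40,320, which is exactly the kind of bargain a "thinking humanly" planner would accept.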
Going one step further: do we really think (e.g., logical reasoning, satisfiability checking) when making a plan? If not, then we should teach computers our way of finding a plan, rather than teach them a fancy new way called ``think/act rationally.''
I think, therefore I am? No: I am human, therefore I am.