No, not at the time of this writing. As professionals in the field have related to me, the best litmus test for whether a problem cannot be automated is whether it is AI-complete: that is, whether automating the task is at least as hard as solving the central problem of artificial intelligence, making computers as intelligent as people.

Within this framing, consider several of the other answers to this question. Many of them hinge on problems that humans solve well but computers do not, including large, multidimensional fuzzy matching and searching, and problems bounded by EXPTIME or EXPSPACE (such as a deterministic solver for the game of Go, used to derive the set of correct moves an algorithm should reach). Humans are imperfect at these problems, but their expert decision-making and pattern recognition still outperform the cutting edge of AI research at the time of this writing.

Using this definition, you should be able to divide the problem space cleanly into automatable tasks and tasks that require some degree of manual intervention. From there, I would consolidate the former set (by analogy and by design principles beyond the scope of this answer) and prune the latter until you arrive at a convincing, implementable set of tests for your system.
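As a minimal sketch of that triage step, here is one way to partition candidate test tasks using an AI-completeness predicate. Everything here is hypothetical for illustration: the task names, the `AI_COMPLETE` set, and the `is_ai_complete` predicate all stand in for whatever judgment you would apply to your own system.

```python
def partition(tasks, is_ai_complete):
    """Split tasks into (automatable, manual) buckets using the predicate.

    Tasks the predicate flags as AI-complete go to the manual bucket;
    everything else is a candidate for automation.
    """
    automatable, manual = [], []
    for task in tasks:
        (manual if is_ai_complete(task) else automatable).append(task)
    return automatable, manual


# Hypothetical examples: tasks that require human-level judgment are
# treated as AI-complete and routed to manual testing.
AI_COMPLETE = {"judge visual layout", "assess UX quality"}

tasks = [
    "verify API status codes",
    "judge visual layout",
    "check sorting invariants",
    "assess UX quality",
]

automatable, manual = partition(tasks, lambda t: t in AI_COMPLETE)
```

In practice the predicate would be a human decision per task rather than a set lookup, but the shape of the division (and the follow-up of consolidating one bucket and pruning the other) stays the same.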

Good luck!
