One of the basics of machine learning is that it attempts to determine the best result from a massive amount of data. The issue I am running into is that I need to quantify my results, but in SQA the results are usually binary (pass or fail), with very little room for anything in between.
Does anyone have any experience with quantifying results beyond pass or fail?
I was considering using machine learning to optimize test runs — in essence, to be able to say "test data x exercises 20% more of the application than test data y", or potentially to work toward 100% code coverage through unit tests.
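To illustrate what I mean by a quantified score: instead of recording pass/fail, a run could be scored by how much of the code under test it exercises. This is only a toy sketch — `classify`, the branch-tracking set, and the sample inputs are all made up for illustration (a real setup would use a coverage tool rather than manual instrumentation):

```python
# Toy sketch: score a test run by branch coverage instead of pass/fail.
# The function under test and the branch IDs are illustrative only.

executed = set()  # branch IDs hit during the current run

def classify(n):
    """Hypothetical function under test; records each branch it takes."""
    if n < 0:
        executed.add("negative")
        return "negative"
    elif n == 0:
        executed.add("zero")
        return "zero"
    else:
        executed.add("positive")
        return "positive"

ALL_BRANCHES = {"negative", "zero", "positive"}

def coverage_score(inputs):
    """Run `classify` on each input; return branch coverage as a fraction."""
    executed.clear()
    for n in inputs:
        classify(n)
    return len(executed) / len(ALL_BRANCHES)

x_data = [-5, 0, 7]  # hits all three branches
y_data = [1, 2, 3]   # hits only the positive branch

print(coverage_score(x_data))  # 1.0
print(coverage_score(y_data))  # 0.333...
```

With a continuous score like this, "x data exercises more of the application than y" becomes a comparison of two numbers rather than two pass/fail verdicts, which is the kind of signal an optimizer could work with.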
To ask a clear question:
Is there any way to quantify the results of a test beyond pass or fail? If so, how have you implemented this, or seen it implemented?