I’m at a small startup with an AI product. I’m trying to put a robust accuracy test in place, but I’m unclear on best practices. For example, right now we test on a small subset of data for every merge request — but for production merges, should we test on all data? And what do we do if a change gets better on some metrics but worse on others?
I’m looking for references describing best practices, and/or (free) utilities that can help. Thank you!
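To make the trade-off question concrete, the kind of check I have in mind is a per-metric regression gate in CI: compare the candidate model against the current baseline and fail if any metric drops beyond an allowed tolerance. This is just a rough sketch — the metric names, values, and thresholds here are made up:

```python
def check_regression(baseline: dict, candidate: dict, tolerances: dict) -> list:
    """Return the names of metrics where the candidate regresses
    beyond its allowed tolerance relative to the baseline."""
    failures = []
    for metric, base_value in baseline.items():
        tol = tolerances.get(metric, 0.0)  # default: no regression allowed
        if candidate[metric] < base_value - tol:
            failures.append(metric)
    return failures

# Hypothetical numbers: accuracy improved, recall regressed past its tolerance.
baseline = {"accuracy": 0.91, "recall": 0.84}
candidate = {"accuracy": 0.93, "recall": 0.82}
tolerances = {"accuracy": 0.01, "recall": 0.01}

print(check_regression(baseline, candidate, tolerances))  # → ['recall']
```

In a setup like this, a merge request would be blocked (or flagged for human review) whenever the failure list is non-empty, rather than gating on a single aggregate number that can hide a regression in one metric behind an improvement in another.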
submitted by /u/ayaPapaya
from Software Development – methodologies, techniques, and tools. Covering Agile, RUP, Waterfall + more! https://ift.tt/i4yeA0Y