
      Setting our heart-attack-predicting AI loose with “no-code” tools

      news.movim.eu / ArsTechnica · Tuesday, 9 August, 2022 - 13:00

    Ahhh, the easy button! (credit: Aurich Lawson | Getty Images)

    This is the second episode in our exploration of "no-code" machine learning. In our first article, we laid out our problem set and discussed the data we would use to test whether a highly automated ML tool designed for business analysts could return cost-effective results near the quality of more code-intensive methods involving a bit more human-driven data science.

    If you haven't read that article, you should go back and at least skim it. If you're all set, let's review what we'd do with our heart attack data under "normal" (that is, more code-intensive) machine learning conditions and then throw that all away and hit the "easy" button.

    As we discussed previously, we're working with a set of cardiac health data derived from a study at the Cleveland Clinic Institute and the Hungarian Institute of Cardiology in Budapest (as well as other places whose data we've discarded for quality reasons). All that data is available in a repository we've created on GitHub, but its original form is part of a repository of data maintained for machine learning projects by the University of California, Irvine. We're using two versions of the data set: a smaller, more complete one consisting of 303 patient records from the Cleveland Clinic and a larger (597-patient) database that incorporates the Hungarian Institute data but is missing two of the types of data from the smaller set.
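A quick way to see the gap between the two versions of the data set is to compare their column sets directly. The sketch below is illustrative only: the column names follow the UCI heart disease data set's conventions, the tiny inline records are made up (not real patients), and we assume for the example that the two absent fields in the larger set are `ca` and `thal`, which are largely missing from the Hungarian records in the UCI repository.

```python
import pandas as pd

# Smaller, more complete set (Cleveland-style columns; values invented).
cleveland = pd.DataFrame({
    "age": [63, 67], "chol": [233, 286],
    "ca": [0, 3], "thal": [1, 2], "num": [0, 2],
})

# Larger combined set: same core fields, but two columns absent,
# mirroring the gap described above.
combined = pd.DataFrame({
    "age": [63, 67, 54], "chol": [233, 286, 239],
    "num": [0, 2, 0],
})

# Columns present in the smaller set but missing from the larger one.
missing = sorted(set(cleveland.columns) - set(combined.columns))
print(missing)  # ['ca', 'thal']
```

This kind of check is worth running before any modeling, no-code or otherwise, since a tool trained on the smaller set would expect features the larger set simply doesn't have.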



      No code, no problem—we try to beat an AI at its own game with new tools

      news.movim.eu / ArsTechnica · Monday, 1 August, 2022 - 13:00

    Is our machine learning yet?


    Over the past year, machine learning and artificial intelligence technology have made significant strides. Specialized algorithms, including OpenAI's DALL-E, have demonstrated the ability to generate images based on text prompts with increasing canniness. Natural language processing (NLP) systems have grown closer to approximating human writing and text. And some people even think that an AI has attained sentience. (Spoiler alert: It has not.)

    And as Ars' Matt Ford recently pointed out here, artificial intelligence may be artificial, but it's not "intelligence"—and it certainly isn't magic. What we call "AI" is dependent upon the construction of models from data using statistical approaches developed by flesh-and-blood humans, and it can fail just as spectacularly as it succeeds. Build a model from bad data and you get bad predictions and bad output—just ask the developers of Microsoft's Tay Twitterbot about that.

    For a much less spectacular failure, just look to our back pages. Readers who have been with us for a while, or at least since the summer of 2021, will remember that time we tried to use machine learning to do some analysis—and didn't exactly succeed. ("It turns out 'data-driven' is not just a joke or a buzzword," said Amazon Web Services Senior Product Manager Danny Smith when we checked in with him for some advice. "'Data-driven' is a reality for machine learning or data science projects!") But we learned a lot, and the biggest lesson was that machine learning succeeds only when you ask the right questions of the right data with the right tool.
