The CoreML model I created does not classify images correctly

I completed the course on CoreML. The CoreML model I created (CatOrDog) worked correctly in the “Preview” tab of Create ML. However, after I put it into the Xcode project (also called CatOrDog) and replaced “MobileNetV2” in “Animal.swift” with it, it did not classify images correctly: it returned “Cat” with high confidence every time. This happened even when I used all the files provided by the course. Please help.

I submitted this question a week ago and am still waiting for a reply. Please help.

I guess the reality is that not many of us (if any at all) have experience with CoreML, hence the lack of replies. It’s a niche area, unfortunately.

Just for the sake of it, I followed the course myself, and when I changed to the CatOrDog .mlmodel file I discovered that it would not work on my simulator but would work just fine on my real device.

When targeting the simulator, the console displayed the message “Failed to get the home directory when checking model path.” I have no clue yet as to why that is happening.

With regard to the issue you are having where it always thinks the image is a cat when it should be a dog, I can only assume that something has gone wrong in the model training process. My model accurately assesses whether the animal is a cat or a dog.
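
If you want to rule out a corrupted model file, one sanity check is to retrain it programmatically with the CreateML framework in a macOS playground instead of the Create ML app. A rough sketch, with hypothetical paths standing in for wherever the course’s training images live:

    import CreateML
    import Foundation

    // Hypothetical paths: point these at your training-data folder
    // (subfolders named after each label, e.g. "Cat" and "Dog")
    // and at wherever you want the trained model written.
    let trainingDir = URL(fileURLWithPath: "/path/to/Training Data")
    let outputURL = URL(fileURLWithPath: "/path/to/CatOrDog.mlmodel")

    // Train an image classifier from the labeled directories and save it.
    let classifier = try MLImageClassifier(
        trainingData: .labeledDirectories(at: trainingDir)
    )
    try classifier.write(to: outputURL)

If a model trained this way behaves the same, the problem is more likely in the training data than in the app.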

I have the exact same problem (Xcode 15.2), where it’s constantly saying everything is a cat.

Also, I would get the error “Could not create inference context” unless I added

    request.usesCPUOnly = true

since the simulator doesn’t support the Neural Engine. However, usesCPUOnly is deprecated in iOS 17, and I’m not sure what to replace it with.
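
From the deprecation notes, the intended replacement seems to be setting the compute units on the model itself via MLModelConfiguration when you load it, rather than on the Vision request, though I haven’t confirmed it fixes the inference-context error. A minimal sketch, assuming the CatOrDog class that Xcode auto-generates from the .mlmodel file:

    import CoreML
    import Vision

    // Force CPU execution on the model itself instead of using the
    // deprecated request.usesCPUOnly flag.
    let config = MLModelConfiguration()
    config.computeUnits = .cpuOnly

    // CatOrDog is the class Xcode generates from CatOrDog.mlmodel.
    guard let catOrDog = try? CatOrDog(configuration: config),
          let model = try? VNCoreMLModel(for: catOrDog.model) else {
        fatalError("Loading the CoreML model failed.")
    }

    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let top = results.first else { return }
        print(top.identifier, top.confidence)
    }
    // No request.usesCPUOnly here; compute units were set on the model.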

Nobody else has ideas on what’s gone wrong here?

Xcode 15 has two different versions of the image trainer, v1 and v2. Both do the wrong thing, but v2 says everything is 100% a cat, as opposed to v1, which said everything was 80-90% a cat. Not sure how that’s an improvement! LOL.