The CoreML model I created does not classify images correctly

I completed the course module on CoreML. The CoreML model I created (CatOrDog) works correctly in the “Preview” tab of “Create ML”. However, after I put it into the Xcode project (also called CatOrDog) and replaced “MobileNetV2” in “Animal.swift”, it no longer classifies images correctly: it returns “Cat” with high confidence every time. This happens even when I use the files provided by the course. Please help.
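For reference, this is roughly what my classification code looks like after the swap. The CatOrDog class name comes from the generated model file; the rest is only a sketch of a typical Vision setup, not an exact copy of the course code:

```swift
import UIKit
import CoreML
import Vision

// Rough sketch of the detect method after replacing MobileNetV2 with the
// custom CatOrDog model. The structure is assumed, not the exact course file.
func detect(image: CIImage) {
    // Load the auto-generated CatOrDog model class and wrap it for Vision.
    guard let coreMLModel = try? CatOrDog(configuration: MLModelConfiguration()).model,
          let visionModel = try? VNCoreMLModel(for: coreMLModel) else {
        fatalError("Failed to load the CatOrDog Core ML model.")
    }

    let request = VNCoreMLRequest(model: visionModel) { request, error in
        guard let results = request.results as? [VNClassificationObservation],
              let topResult = results.first else {
            print("Could not classify image: \(error?.localizedDescription ?? "unknown error")")
            return
        }
        // The first observation is the highest-confidence label (“Cat” or “Dog”).
        print("\(topResult.identifier) – confidence \(topResult.confidence)")
    }

    let handler = VNImageRequestHandler(ciImage: image)
    do {
        try handler.perform([request])
    } catch {
        print("Failed to perform classification: \(error)")
    }
}
```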

I submitted this question a week ago and am still waiting for a reply. Please help.

I guess the reality is that not many of us (if any at all) have experience with CoreML, hence the lack of a reply. It’s a niche area, unfortunately.

Just for the sake of it, I followed that course myself, and when I switched to the CatOrDog ml file I discovered that it would not work on my simulator, but it worked just fine on my real device.

When targeting the simulator, the console displayed the message “Failed to get the home directory when checking model path.” I have no clue yet as to why that is happening.

With regard to the issue you are having, where the model always returns “Cat” when it should be “Dog”, I can only assume that something went wrong in the model training process. My model accurately assesses whether the animal is a cat or a dog.
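If you want to rule out the training step, one option is to re-train outside the Create ML app, for example in a macOS playground, and check the accuracy numbers before exporting. This is only a rough sketch: the folder paths and the Training Data / Testing Data layout are assumptions about how your images are organised, so adjust them to match yours:

```swift
import CreateML
import Foundation

// Assumed layout: each directory contains a Cat/ and a Dog/ subfolder of images.
let trainingDir = URL(fileURLWithPath: "/path/to/Training Data")
let testingDir  = URL(fileURLWithPath: "/path/to/Testing Data")

// Train an image classifier from the labelled folders.
let classifier = try MLImageClassifier(trainingData: .labeledDirectories(at: trainingDir))

// Check that the model actually learned to separate the two classes.
print("Training accuracy: \(100 * (1 - classifier.trainingMetrics.classificationError))%")
let evaluation = classifier.evaluation(on: .labeledDirectories(at: testingDir))
print("Testing accuracy:  \(100 * (1 - evaluation.classificationError))%")

// Export the model so it can replace CatOrDog.mlmodel in the Xcode project.
try classifier.write(to: URL(fileURLWithPath: "/path/to/CatOrDog.mlmodel"))
```

If the testing accuracy comes out near 50%, the problem is in the training data or the training run rather than in the Xcode project.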