
IBM/rainbow

Use Watson Visual Recognition and Core ML to create a Kitura-based iOS game that has users search for a predetermined list of objects.



WARNING: This repository is no longer maintained.

This repository was archived on Jan 22, 2021, and is kept available in read-only mode; it will not be updated.

This code pattern is a timed iOS game that has users find items from a predetermined list of objects. It is built to showcase visual recognition with Core ML in a fun way. This project repository consists of an iOS app and a backend server. Both components are written in the Swift programming language and leverage the Kitura framework on the server side. Cloudant is used to persist user records and best times, and Push Notifications let users know when they have been removed from the top of the leaderboard.

Our application has been published to the App Store under the name WatsonML, and we encourage folks to give it a try. It comes with a built-in model for identifying six objects: shirts, jeans, apples, plants, notebooks, and lastly a plush bee. Our app could not have been built without fantastic pre-existing content from other IBMers: we use David Okun's Lumina project and Anton McConville's Avatar generator microservice. See the references below for more information.

We include instructions on how to modify the application to fit your own needs. Feel free to fork the code and modify it to create your own conference swag game, scavenger hunt, guided tour, or team-building or training event.

When the reader has completed this Code Pattern, they will understand how to:

  • Create a custom visual recognition model in Watson Studio
  • Develop a Swift-based iOS application
  • Deploy a Kitura-based leaderboard
  • Detect objects with Core ML and Lumina

Sample output and gameplay

Flow

  1. Generate a Core ML model using Watson Visual Recognition and Watson Studio.
  2. User runs the iOS application for the first time.
  3. The iOS application calls out to the Avatar microservice to generate a random username.
  4. The iOS application makes a call to Cloudant to create a user record.
  5. The iOS application notifies the Kitura service that the game has started (see the sketch after this list).
  6. The user aims the phone's camera as they search for items, using Core ML to identify them.
  7. The user receives a push notification if they are bumped from the leaderboard.
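
To make step 5 concrete, here is a sketch of how the app might tell the server a game has started. The /games path and the payload fields are illustrative assumptions, not the repo's actual routes (those are defined in Server/); only URLSession and Codable are standard Foundation APIs.

    import Foundation

    // Flow step 5 as a sketch -- endpoint and payload are assumptions.
    struct GameStart: Codable {
        let username: String
        let startTime: Date
    }

    func notifyGameStarted(username: String, serverURL: URL) {
        var request = URLRequest(url: serverURL.appendingPathComponent("games"))
        request.httpMethod = "POST"
        request.setValue("application/json", forHTTPHeaderField: "Content-Type")
        let encoder = JSONEncoder()
        encoder.dateEncodingStrategy = .iso8601
        request.httpBody = try? encoder.encode(GameStart(username: username, startTime: Date()))
        URLSession.shared.dataTask(with: request) { _, _, error in
            if let error = error {
                print("Failed to notify server: \(error)")
            }
        }.resume()
    }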

Included components

  • Core ML: A framework that allows integration of machine learning models into apps.
  • Lumina: An open-source Swift framework that streams video frames through a Core ML model and returns instant results; a usage sketch follows this list.
  • Kitura: A free and open-source web framework written in Swift, developed by IBM and licensed under Apache 2.0. It is an HTTP server and web framework for writing Swift server applications.
  • Watson Visual Recognition: Understands the contents of images: it tags visual concepts, finds human faces, approximates age and gender, and finds similar images in a collection.
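
To show how Lumina and the model fit together, here is a minimal sketch of a streaming camera screen. It assumes Lumina's streaming API as documented around the time this pattern was written (the streamingModelTypes property and the streamed(videoFrame:with:from:) delegate callback) plus the ProjectRainbowModel_1753554316 class Xcode generates from the bundled model; names may differ in other Lumina releases.

    import Lumina
    import UIKit

    // A sketch, not the app's actual controller: stream camera frames
    // through the bundled Core ML model and log the top prediction.
    final class GameViewController: UIViewController, LuminaDelegate {
        func startCamera() {
            let camera = LuminaViewController()
            camera.delegate = self
            // The class Xcode generates from the bundled .mlmodel file.
            camera.streamingModelTypes = [ProjectRainbowModel_1753554316()]
            present(camera, animated: true)
        }

        func streamed(videoFrame: UIImage,
                      with predictions: [([LuminaPrediction]?, Any.Type)]?,
                      from controller: LuminaViewController) {
            guard let best = predictions?.first?.0?.first else { return }
            print("Saw \(best.name) with confidence \(best.confidence)")
        }
    }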

Featured technologies

  • Artificial Intelligence: Artificial intelligence can be applied to disparate solution spaces to deliver disruptive technologies.
  • Mobile: Systems of engagement are increasingly using mobile technology as the platform for delivery.

Prerequisites

The following are prerequisites for developing the application:

  • Xcode
  • IBM Cloud account
  • Carthage: Download the latest release; under Downloads, select Carthage.pkg and install it.

Steps

Setting up your iOS app

Clone the project

git clone https://github.com/IBM/rainbow/

Build the project by navigating to the iOS folder in your terminal and typing:

carthage update --platform iOS

Setting Up Your Kitura Server

  1. Go to the IBM Cloud console, and click Create Resource.

  2. Search for "Cloudant NoSQL DB" and create a service. Take note of the name of the created service.

  3. Go to your Cloudant service home page, and click the green Launch button. Click the database icon on the left, and along the top, click Create Database. Name it routes-users, and click Create Document. Edit your JSON to include a "username" and a "password" of your choosing. You will need these credentials when you set up your iOS application.

  4. Do the same for a Push Notifications service.

To set up push notifications with your app, you will need to follow the guide for embedding them into your app.

  5. After cloning this repository, go to Server/ from the terminal.

  6. Run swift package generate-xcodeproj, which creates the rainbow-server.xcodeproj file.

  7. In the config/ directory, find the file localdev-config.json, which looks like so:

    {"Cloudant NoSQL DB-kl": {"username":"hot","password":"dog","host":"nothotdog","port":443,"url":"hotdog url"    },"rainbow-server-PushNotifications-r6m1": {"appGuid":"hotdog guid","url":"hotdo url","admin_url":"hotdog admin url","appSecret":"hotdog","clientSecret":"not hotdog"    }}

    Update the credentials for the Push Notifications and Cloudant services in localdev-config.json. You will also want to make sure that the service names are correct in mappings.json.

  8. Open the project using Xcode by running: open rainbow-server.xcodeproj.

  9. You can build and run the project in Xcode, or use the accompanying runDocker.sh script to test the app in a Docker container.
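
For orientation, this is the general shape of a Kitura server — a router with a single route, started on port 8080. It is not the repo's actual server code; the real routes live in Server/.

    import Kitura

    // A minimal Kitura app for orientation -- not this repo's routes.
    let router = Router()

    // Simple health-check route.
    router.get("/health") { _, response, next in
        response.send("OK")
        next()
    }

    // Start the HTTP server on port 8080 and block.
    Kitura.addHTTPServer(onPort: 8080, with: router)
    Kitura.run()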

Setting Up Server/Client Credentials

Though the Visual Recognition component of this application does not require API authentication, credentials are required if you decide to save your high scores to the API. If you created a username and password in your Cloudant database, complete the following steps:

  1. Open up the Xcode project for your iOS application.

  2. In the Model folder, create a file called WatsonMLClientCredentials.json (a sketch of its assumed shape follows these steps).

  3. For the cloudant node, update the username and password with the service credentials you installed in localdev-config.json for your server.

  4. For the routes node, update the username and password with the credentials you created in the database during server setup.

From this point forward, you should be able to make valid calls to your Kitura API.
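
As a sketch of what the app expects, the following Codable structure mirrors the two nodes described in steps 3 and 4. The field names are inferred from those steps, not copied from the repo; check iOS/rainbow/Model for the authoritative schema.

    import Foundation

    // Hypothetical shape of WatsonMLClientCredentials.json, inferred from
    // steps 3 and 4 above.
    struct ClientCredentials: Codable {
        struct Credentials: Codable {
            let username: String
            let password: String
        }
        let cloudant: Credentials  // service credentials from localdev-config.json
        let routes: Credentials    // the username/password created in routes-users
    }

    // Decode the bundled file at launch (sketch):
    func loadCredentials() -> ClientCredentials? {
        guard let url = Bundle.main.url(forResource: "WatsonMLClientCredentials",
                                        withExtension: "json"),
              let data = try? Data(contentsOf: url) else { return nil }
        return try? JSONDecoder().decode(ClientCredentials.self, from: data)
    }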

Build Your Own Model

For this, you should pick a theme and set of items -- museum pieces, office hardware, conference booths, whatever you want! As an example, we'll use fruits, and make a model that can distinguish between 3 fruits: apple, pear, and banana.

  1. Take lots of photos of each item, and organize each set of at least 10 photos into its own folder. Zip each folder up so you have:
     a. Apple.zip
     b. Pear.zip
     c. Banana.zip

  2. If you have already created an account on IBM Cloud, then go to Watson Studio and log in with the same credentials.

  3. Click the New Project button, then click the Visual Recognition option, then click OK.

  4. Pick a project name and a description. If you haven't already created a Cloud Object Storage instance, the service should create one for you. Click OK.

  5. Look on the right-hand side of the screen: you should see a label that says "Upload to project". Select all of the .zip files you previously created and let them upload.

  6. As the files upload, drag each of them to the center of the screen, and the classes should be created for you automatically.

  7. As a bonus, add as many photos as you can to the "Negative" training class: anything that resembles, but is not, an object you want to recognize. In our example, this could be an orange, grapes, or another fruit.

  8. Click the Train Model button. Go get a cup of coffee while you wait for this to finish.

  9. When you refresh the page, click the name of the model underneath Visual Recognition Models. Click the Implementation tab, and then click the Core ML option. Download the model that it tells you to download.

  10. Replace the model at iOS/rainbow/Model/ProjectRainbowModel_1753554316.mlmodel with the model you just downloaded.

  11. Update the JSON file that lists the objects: iOS/rainbow/Config/GameObjects.json.

  12. If you need icons, check out https://thenounproject.com/ -- you'll want to find both a colored and a white icon for each item!

Testing The App

You should be able to build and run this app on your device by now. Hold the camera tab in front of one of the objects; if it detects the object successfully, you are in the clear!
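
If you want to sanity-check the replaced model outside the game loop, a small Vision test like the one below can classify a still photo. It assumes a recent Xcode, which generates a ProjectRainbowModel_1753554316 class from the .mlmodel file name, so the class tracks whatever model you dropped in.

    import CoreML
    import UIKit
    import Vision

    // Classify a still photo with the bundled model -- a quick sanity
    // check that the replaced .mlmodel recognizes your objects.
    func classify(_ image: UIImage) throws {
        let coreMLModel = try ProjectRainbowModel_1753554316(
            configuration: MLModelConfiguration()).model
        let visionModel = try VNCoreMLModel(for: coreMLModel)
        let request = VNCoreMLRequest(model: visionModel) { request, _ in
            guard let results = request.results as? [VNClassificationObservation],
                  let best = results.first else { return }
            print("Best guess: \(best.identifier) (confidence \(best.confidence))")
        }
        guard let cgImage = image.cgImage else { return }
        try VNImageRequestHandler(cgImage: cgImage, options: [:]).perform([request])
    }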

Learn more

  • Artificial Intelligence Code Patterns: Enjoyed this Code Pattern? Check out our other AI Code Patterns.
  • AI and Data Code Pattern Playlist: Bookmark our playlist with all of our Code Pattern videos.

License

This code pattern is licensed under the Apache Software License, Version 2. Separate third-party code objects invoked within this code pattern are licensed by their respective providers pursuant to their own separate licenses. Contributions are subject to the Developer Certificate of Origin, Version 1.1 (DCO) and the Apache Software License, Version 2.

Apache Software License (ASL) FAQ

