Heroku is a cloud platform as a service (PaaS) supporting several programming languages, including Python. Heroku is whisk's default option for deploying a model as a web service. In most cases, Heroku is free for proof-of-concept models and, in the long run, a lower-cost, less complex approach than solutions like AWS SageMaker.
To deploy the web service residing in the `app/` directory to Heroku, run `whisk app create [NAME OF THE HEROKU APP]`.
Heroku ML Model Gotchas¶
Your project may require a few slight modifications to deploy successfully to Heroku. Below is a list of common issues.
Max Slug Size Limit¶
Heroku has a maximum slug size of 500 MB (after compression). If your project contains large dependencies (like TensorFlow), you may exceed this limit.
If your app exceeds the maximum slug size, try using cascading `requirements.txt` files, similar to this Stack Overflow answer. For example:
```
|-- requirements
|   |-- common.txt    <- Contains requirements common to all environments.
|   |-- dev.txt       <- Specifies dev-only requirements and requires common.txt.
|   `-- prod.txt      <- Specifies Heroku-only requirements and requires common.txt.
`-- requirements.txt  <- Requires requirements/prod.txt, as Heroku looks for this file.
```
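A minimal sketch of the file contents, assuming this layout (the specific pins are illustrative, not prescriptive):

```text
# requirements.txt (project root) -- Heroku installs from this file
-r requirements/prod.txt

# requirements/prod.txt -- Heroku-only requirements
-r common.txt
tensorflow-cpu

# requirements/dev.txt -- local development requirements
-r common.txt
jupyter
```

Note that pip resolves a `-r` path relative to the file containing it, so `requirements/prod.txt` can reference `common.txt` without a directory prefix.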
The TensorFlow library is greater than 500 MB and exceeds the Heroku slug size limit by itself. Use `tensorflow-cpu` instead, as it consumes less than 150 MB of disk space. Since Heroku does not offer GPUs, there is no loss in functionality.
If you are using NLTK, add an `nltk.txt` file to the project root directory with a list of corpora to download. See the Heroku NLTK docs for more information.
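A minimal `nltk.txt` might look like the following; the corpora listed here are illustrative, so include only the ones your model actually loads:

```text
punkt
stopwords
wordnet
```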
The project contains a `.slugignore` file that removes the `notebooks/` directory before the buildpack runs, which reduces the slug size.
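`.slugignore` uses a syntax similar to `.gitignore`. A sketch of such a file, assuming notebooks and any test suite are not needed at runtime (the `tests/` entry is an example, not part of the generated project):

```text
notebooks/
tests/
*.ipynb
```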