This repository has been archived by the owner on Oct 18, 2023. It is now read-only.

Research if we can make TF prediction more user friendly by packaging model and its metadata #15

Open
ravwojdyla opened this issue Mar 12, 2018 · 1 comment
Labels
enhancement (New feature or request) · help wanted (Extra attention is needed) · question (Further information is requested)

Comments

@ravwojdyla
Contributor

Right now TF prediction is rather cumbersome and error-prone; it requires:

  • specifying the input and output operations
  • shaping the input Tensors
  • reshaping the output Tensors

Research whether we could provide an easy path from https://github.com/spotify/spotify-tensorflow to package a model/graph together with all the necessary metadata, so that zoltar users don't need to worry about low-level TensorFlow constructs like operations, shapes, and tensors. This approach could cover ~80% of use cases; we should still allow for completely custom prediction.

@ravwojdyla added the enhancement, help wanted, and question labels on Mar 12, 2018
@richwhitjr

I wonder if there is a way to pair a TF Estimator with equivalent JVM prediction code, so that if you use, say, a pre-canned logistic regression Estimator in normal TF, you can call into a Spotify JVM pre-canned logistic regression predictor.

At a high level, though, most of these are just Float[] -> Float[] as the prediction interface anyway, so it may not be worth the extra effort. We can almost guarantee that interface, since you can reshape the vector into an arbitrary tensor prior to saving the model for serving, using the export call.
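The reshape step above is the whole trick: a flat Float[] determines an arbitrarily shaped tensor once the shape is fixed, so the predictor only ever handles flat arrays. A minimal pure-Python illustration (real code would use numpy/TF reshape; these helpers are hypothetical):

```python
def reshape(flat, shape):
    """Reshape a flat list of floats into nested lists matching `shape`."""
    size = 1
    for dim in shape:
        size *= dim
    if size != len(flat):
        raise ValueError("shape %r does not match %d elements" % (shape, len(flat)))
    if len(shape) == 1:
        return list(flat)
    # Split the flat vector into shape[0] chunks and recurse on each.
    step = size // shape[0]
    return [reshape(flat[i * step:(i + 1) * step], shape[1:]) for i in range(shape[0])]

def flatten(tensor):
    """Inverse: collapse nested lists back into a flat Float[]-style list."""
    if not isinstance(tensor, list):
        return [tensor]
    out = []
    for item in tensor:
        out.extend(flatten(item))
    return out
```

Since `flatten(reshape(v, s)) == v` for any compatible shape, the Float[] -> Float[] contract holds regardless of the tensor shapes the model uses internally.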

2 participants