Compute API - Description

The Compute API is a library for working with DCP, the Distributed Compute Protocol, to perform arbitrary computations on the Distributed Computer.

Record of Issue

Date Author Change
May 6 2020 Wes Garland Stamp Compute API 1.5.2
May 5 2020 Nazila Akhavan Add getJobInfo, getSliceInfo
Feb 24 2020 Ryan Rossiter Add job.status, job.runStatus, clarify marketRate
Dec 9 2019 Ryan Rossiter Update requirements object property descriptions
Oct 8 2019 Jason Erb Clarified distinction between Worker and Sandbox
Sep 23 2019 Wes Garland Compute API 1.51 Release. Improved language.
Sep 20 2019 Wes Garland Compute API 1.5 Release.
Improved vocabulary, minor document reorg, Application->Appliance, Appliance re-work, job costing details, Scheduler class, Scheduler explanation, Job Lifetime, ENOFUNDS refinement
Jul 15 2019 Wes Garland Glossary update; Generator->Job, Worker Thread->Sandbox, Miner->Worker
Feb 13 2019 Wes Garland - Added offscreenCanvas and sandboxing info to Worker Environment
- Added API compute.status
- Began enumerating Worker Capabilities
- Sparse and Output Set Range Objects
- Added job handle collateResults and results properties
- More work on task/slice differentiation
Nov 23 2018 Wes Garland - Deprecated Application
- Introduced Shared State and Access Keys
- Improved Task/Slice Differentiation/Composability
Oct 31 2018 Wes Garland Initial Release
Oct 29 2018 Wes Garland Second Draft Review
Oct 23 2018 Wes Garland Moved toward generator-oriented syntax
Oct 19 2018 Wes Garland First Draft Review

Intended Audience

This document is intended for software developers working with DCP. It is organized as a reference manual / functional specification; introductory documentation is in the DCP-Client document.

Overview

This API focuses on jobs, both ad-hoc and derived from published appliances, each built around iteration over a common work function, and on the events those jobs emit. The API entry points are all exports of the DCP compute module.

See Also

Implementation Status

As of this writing (September 2019), the Compute API is very much a “work in progress”, with core functionality finished and well-tested, but finer details unfinished or omitted entirely. This document intends to document the final product, so that early-access developers can write future-proof or future-capable code today.

Note: The list below is not complete, and may not be up to date. Caveat Developtor!

Implemented

Partially Implemented

Not Implemented

Definitions

About the Compute API

The compute module is the holding module for classes and configuration options (especially default options) related to this API. Throughout this document, it is assumed that this module’s exports are available as the global symbol compute.

Most computations on the Distributed Computer operate by mapping an input set to an output set by applying a work function to each element in the input set. Input sets can be arbitrary collections of data, but are frequently easily-described number ranges or distributions.

Work functions can be supplied directly as arguments in calls to API functions like compute.do and compute.for, or they can be stored on the Distributed Computer in Appliances.

note - When work is a function, it is turned into a string with Function.prototype.toString before being transmitted to the scheduler. This means that work cannot close over local variables, as these local variables will not be defined in the worker’s sandbox. When work is a string, it is evaluated in the sandbox, and is expected to evaluate to a single function.
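The closure pitfall described in this note can be demonstrated with plain JavaScript, no DCP required; the indirect eval below stands in for the sandbox's fresh evaluation scope:

```javascript
function makeWork () {
  const factor = 10                    // local variable; lost in transmission
  return (i) => i * factor
}

const work = makeWork()
console.log(work(3))                   // 30: the closure still works locally

// The scheduler receives only the function's source text:
const serialized = work.toString()     // "(i) => i * factor"

// Re-evaluating that string in a fresh scope, as a sandbox must, loses the
// closure, so `factor` is no longer defined when the function runs.
const revived = (0, eval)(`(${serialized})`)
try {
  revived(3)
} catch (e) {
  console.log(e.name)                  // ReferenceError
}
```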

Jobs

Jobs associate work with input sets and arguments, and enable their mapping and distribution on the Distributed Computer. Jobs with ad-hoc work functions can be created with the compute.do and compute.for APIs; jobs using previously-published work functions are created from appliances via the Appliance.prototype.do and Appliance.prototype.for methods.

Fundamentally, a Job is

The Scheduler on the Distributed Computer is responsible for moving work and data around the network, in a way which is cost-efficient; costs are measured in terms of CPU time, GPU time, and network traffic (input and output bytes).

Clients specify the price they are willing to pay to have their work done; Workers specify the minimum wage they are willing to work for. The scheduler connects both parties in a way that allows Workers to maximize their income, while still serving the needs of all users on the Distributed Computer. Essentially, the higher the wage the client is willing to pay, the more Workers will be assigned to compute the job.

Work Characterization

Work is characterized throughout the lifetime of the job. CPU time and GPU time are constantly normalized against benchmark specifications, and network traffic is measured directly. Work starts out as either uncharacterized or well-characterized.

Uncharacterized Work

Uncharacterized work is released slowly into the network for measurement. Funds are escrowed during characterization in an amount that would pay for 95% of jobs recently seen on the network, but work is paid for at market rate. This means that about 95% of jobs will see a refund from escrow at the end of initial characterization (“estimation”), and about 5% of jobs will trigger an ENOFUNDS handler. If ENOFUNDS is triggered, the client can either cancel the job, or transfer more money into escrow to cover the difference and resume the job.

Well-Characterized Work

Well-characterized work can be deployed on the network more quickly, skipping the estimation phase. Client developers can specify work characterization (sliceProfile objects) while submitting work; these can be either directly specified or calculated locally via the job estimation facilities. Additionally, work originating from Appliances is always well-characterized.

Job Lifecycle

[Flowchart] The flowchart of the job lifecycle proceeds as follows:

  1. compute.for.exec is invoked and the deployment fee is charged.
  2. If the job is not well characterized, a few slices are deployed at market rate and enough DCC is escrowed to compute 95% of the slices. If there is not enough DCC for estimation, an ENOFUNDS event handler (if registered) can call job.escrow and job.resume; with no handler, compute.for.exec rejects. Otherwise, estimation completes.
  3. Main work begins. With a fixed rate per slice and a fixed number of slices, enough funds are escrowed for the whole job; otherwise, enough funds are escrowed for each task as it is handed out.
  4. If the account does not hold enough DCC, an ENOFUNDS event handler (if registered) can call job.escrow and job.resume; with no handler, compute.for.exec rejects.
  5. Once all slices are computed, compute.for.exec resolves and the job is done.

Static Methods

compute.cancel

This function allows the client to cancel a running job. This function takes as its sole argument a Job id and tells the scheduler to cancel the job. This method returns a promise which is resolved once the scheduler acknowledges the cancellation and has transitioned to a state where no further costs will be incurred as a result of the job.

compute.do

This function returns a JobHandle (an object which corresponds to a job), and accepts one or more arguments, depending on form.

Argument Type Description
n Number number of times to run the work function
work String the work function to run. If it is not a string, the toString() method will be invoked on this argument.
arguments Object an optional Array-like object which contains arguments which are passed to the work function

compute.for

This function returns a JobHandle (an object which corresponds to a job), and accepts two or more arguments, depending on form. The work is scheduled for execution with one slice of the input set, for each element in the set. It is expected that work could be executed multiple times in the same sandbox, so care should be taken not to write functions which depend on uninitialized global state and so on.

Every form of this function returns a handle for a job which, when executed, causes work to run n times and resolves the returned promise with an array of the values returned by work, indexed by slice number (position within the set), where n is the number of elements in the input set.

When the input set is composed of unique primitive values, the array which resolves the promise will also have an own-property entries method which returns an array, indexed by slice number, of [key, value] pairs, where key is the input to work and value is the return value of work for that input. This array will be compatible with functions accepting the output of Object.entries() as their input.
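The Object.entries() compatibility can be shown with plain JavaScript; the values below are hypothetical results for an input set of 1..3 and a work function that multiplies by 10:

```javascript
// Shape of the array returned by results.entries() (hypothetical values):
const entries = [['1', 10], ['2', 20], ['3', 30]]

// Because each element is a [key, value] pair, standard Object machinery
// that accepts the output of Object.entries() works directly:
const byInput = Object.fromEntries(entries)
console.log(byInput)       // { '1': 10, '2': 20, '3': 30 }
console.log(byInput['2'])  // 20
```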

The for method executes a function, work, in the worker by iterating over a series of values. Each iteration is run as a separate task, and each receives a single value as its first argument. This is an overloaded function, accepting iteration information in a variety of ways. When work returns, the return value is treated as result, which is eventually used as part of the array or object which resolves the returned promise.

note - When work is a function, it is turned into a string with Function.prototype.toString before being transmitted to the scheduler. This means that work cannot close over local variables, as these local variables will not be defined in the worker’s sandbox. They can be provided to the arguments argument and will be given to the work function after the iterator value. When work is a string, it is evaluated in the sandbox, and is expected to evaluate to a single function.

Argument Type Description
rangeObject Object see Range Objects, below
start Number the first number in a range
stop Number the last number in a range
step Number optional, the space between numbers in a range
work String the work function to run. If it is not a string, the toString() method will be invoked on this argument.
arguments Object an optional Array-like object which contains arguments which are passed to the work function

The promise is resolved following the same rules as in form 1, except the arrays/objects nest with each range object. (See examples for more clarity)

Future Note - form4 with an ES6 function* job argument presents the possibility where, in a future version of DCP, the protocol will support extremely large input sets without transferring the sets to the scheduler in their entirety. Since these are ES6 function* generators, the scheduler could request blocks of data from the client even while the client is ‘blocked’ in an await call, without altering the API. This means DCP could process, for example, jobs where the input set is a very long list of video frames and each slice represents one frame.

Result Handles

Result handles act as a proxy to access the results (a mapping from input set to output set) of a job. The result handle for a job is returned by exec, provided by the complete event, and available as job.results.
The result handle is an Array-like object which represents a mapping between slice number (index) and a result. Additional, non-enumerable methods will be available on this object to make marrying the two sets together more straightforward. These methods are based on methods of Object.

When job.collateResults is set to true (the default), the result handle will be automatically populated with the results as they are computed. When job.collateResults is false, the fetch method must be used before results are available in the handle. Attempting to access a result before it is in memory will cause an error to be thrown.

There are 4 methods provided for accessing or manipulating results that are stored on the scheduler:

compute.status

This function allows the client to query the status of jobs which have been deployed to the scheduler.

compute.getJobInfo

This async function accepts a job ID as its sole argument and returns the information and status of the job specified by that ID.

compute.getSliceInfo

This async function accepts a job ID as its sole argument and returns the status and history of the slices of the job specified by that ID.

compute.marketRate

This function allows the client to specify the “going rate” for a job; the rate is determined dynamically for each slice as it is paid out.

NYI: As of this writing (May 2020), the production scheduler (DCP Scheduler v3) will use 0.0001465376 DCC per slice as the market rate, multiplied by the factor if provided. The API documented below reflects the behaviour of Scheduler v4.

compute.getMarketValue

This function returns a promise which is resolved with a signed WorkValueQuote object. This object contains a digital signature which allows it to be used as a firm price quote during the characterization phase of the job lifecycle, provided the job is deployed before the quoteExpiry and the CPUHour, GPUHour, InputMByte and OutputMByte fields are not modified. This function ensures the client developer’s ability to control costs during job characterization, rather than being completely at the mercy of the market.
Note: Market rates are treated as spot prices, but are calculated as running averages.

compute.calculateSlicePayment

This function accepts as its arguments a SliceProfile object and a WorkValue object, returning a number which describes the payment required to compute such a slice on a worker or in a market working at the rates described in the WorkValue object. This function does not take into account job-related overhead.

job.setSlicePaymentOffer(1.0001 * compute.calculateSlicePayment(job.initialSliceProfile, await job.scheduler.getMarketRate()))
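The payment calculation is presumably a dot product of the slice profile against the per-unit rates in the WorkValue object. A hypothetical sketch for illustration only (the property names follow the SliceProfile and WorkValue descriptions in this document, and the numbers are invented, not realistic DCC rates):

```javascript
// Hypothetical re-implementation; the real function is exported by the
// compute module and may also account for precision and rounding rules.
function calculateSlicePayment (sliceProfile, workValue) {
  return sliceProfile.cpuHours     * workValue.cpuHour +
         sliceProfile.gpuHours     * workValue.gpuHour +
         sliceProfile.inputMBytes  * workValue.inputMBytes +
         sliceProfile.outputMBytes * workValue.outputMBytes
}

// Invented numbers, chosen for easy arithmetic:
const profile = { cpuHours: 2, gpuHours: 1, inputMBytes: 10, outputMBytes: 5 }
const rate    = { cpuHour: 3, gpuHour: 4, inputMBytes: 1, outputMBytes: 2 }
console.log(calculateSlicePayment(profile, rate))  // 30
```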

Data Types

Range Objects

Range objects are vanilla ES objects used to describe value range sets for use by compute.for(). Calculations made to derive the set of numbers in a range are carried out with BigNumber (i.e. arbitrary-precision) support. The numbers Infinity and -Infinity are not supported, and the API does not differentiate between +0 and -0.

Describing value range sets, rather than simply enumerating ranges, is important because of the need to schedule very large sets without the overhead of transmitting them to the scheduler, storing them, and so on.

Range Objects are plain JavaScript objects with the following properties:

When end - start is not an exact multiple of step, the job will behave as though end were the nearest number in the range which is an even multiple of step, offset by start. For example, the highest number generated in the range object {start: 0, end: 1000, step: 3} would be 999.
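The step-overflow rule can be illustrated with a tiny helper; expandRange is hypothetical and not part of the API (real ranges are never materialized client-side):

```javascript
// Hypothetical helper: materialize a range object into its values, showing
// how a range behaves when (end - start) is not an exact multiple of step.
function expandRange ({ start, end, step = 1 }) {
  const out = []
  for (let v = start; v <= end; v += step) out.push(v)
  return out
}

const values = expandRange({ start: 0, end: 1000, step: 3 })
console.log(values[values.length - 1])                     // 999
console.log(expandRange({ start: 10, end: 13, step: 2 }))  // [ 10, 12 ]
```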

Sparse Range Objects

Range Objects whose values are not contiguous are said to be sparse. The syntax for specifying a sparse range object is

{ sparse: [range object, range object...]}

Any range object can be used in the specification of a sparse range object, except for a sparse range object.
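A sparse range is simply the concatenation of its member ranges; expandSparseRange below is a hypothetical helper for illustration (the API never materializes ranges client-side):

```javascript
// Hypothetical helper: materialize a plain range object.
function expandRange ({ start, end, step = 1 }) {
  const out = []
  for (let v = start; v <= end; v += step) out.push(v)
  return out
}

// Hypothetical helper: flatten a sparse range into its member values.
function expandSparseRange ({ sparse }) {
  return sparse.flatMap((range) => expandRange(range))
}

console.log(expandSparseRange({ sparse: [{ start: 1, end: 2 }, { start: 5, end: 6 }] }))
// [ 1, 2, 5, 6 ]
```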

Distribution Objects

Distribution objects are used with compute.for, much like range objects. They are created by methods of the set export of the stats module, and are used to describe input sets which follow common distributions used in the field of statistics. The following methods are exported:

let stats = require('stats')
let job = compute.for(stats.set.poisson(10, 0.1, 100), (i) => i)

Job Handles

Job handles are objects which correspond to jobs, and are instances of compute.JobHandle. They are created by some exports of the compute module, such as compute.do and compute.for.

Properties

Present once job has been deployed

Methods

Events

The JobHandle is an EventEmitter (see EventEmitters, below), and it can emit the following events:

The jobHandle’s work property is also an EventEmitter, which will emit events from the work function. It can emit the following events:

SliceProfile Objects

SliceProfile objects are used to describe the estimated or real cost in CPU time, GPU time and network I/O of computing a slice of a given job. Each object has the following properties:

WorkValue Objects

WorkValue objects are used to describe the value, in DCC, ascribed to each element of a SliceProfile. These objects are used by workers to communicate their minimum wage to the scheduler, by the scheduler to communicate market rate quotes to the client, and so on. Each object has the following properties:

WorkValueQuote Objects

WorkValueQuote objects can be used as WorkValue objects, but they contain extra information and an embedded digital signature, identifying them as firm quotes from the underlying scheduler.
* quoteExpiry – an instance of Date; if this SlicePaymentDescriptor object is used as a market value quote, then the quote is valid until this timestamp.
* cpuHour – see WorkValue Objects
* gpuHour – see WorkValue Objects
* inputMBytes – see WorkValue Objects
* outputMBytes – see WorkValue Objects

Worker Environment

Work functions (i.e. the final argument to compute.for()) are generally executed in sandboxes inside workers. These are the functions which map the input set to the output set.

Each work function receives as its input one element in the input set. Multi-dimensional elements, such as those defined in compute.for() form 3, will be passed as multiple arguments to the function. The function returns the corresponding output set element, and must emit progress events.

The execution environment is based on CommonJS, providing access to the familiar require() function, user-defined modules, and modules in packages deployed on Distributed Compute Labs’ module server. Global symbols which are not part of the ECMA-262 specification (such as XmlHTTPRequest and fetch) are not available.

Global Symbols

Appliances

Appliances are collections of one or more work functions that live on the Distributed Computer which, when combined with an input set, produce jobs. Since appliance functions are, by definition, well-characterized, jobs derived from appliances allow the Scheduler to skip the estimation phase. This, in turn, allows client developers to build applications with more deterministic costing and lower deployment latency.

Launching an appliance work function on the network is a two-step process. The developer creates and names the appliance and its functions and submits it to the Distributed Computer, along with an estimation function and sample input set for each work function. Once the appliance’s work functions have been fully characterized, the appliance becomes available for use by anyone who knows the appliance name.

Future versions of DCP will expand the Appliance concept to include the ability to support licensing and royalties. For example, developers will be able to publish appliances which

Appliance Handles

Appliance handles are objects which correspond to appliances. They are created by instantiating the compute.Appliance class.

Constructor

The Appliance constructor is an overloaded object which is used for defining/publishing appliances and referencing appliances on the scheduler.

new Appliance() - form 1 - preparing to publish

This form of the constructor is used for creating/publishing appliances. It accepts three arguments: applianceName, version, and publishKeystore.

new Appliance() - form 2 - published appliance

This form of the constructor is used to access functions which have already been published. It accepts two arguments: applianceName, and version.

Methods

For new appliances

For deployed appliances

Properties

These properties are optional, public, and probably displayed or used during mining:

Estimation Functions

Estimation functions are used by the scheduler to characterize slices derived from Appliances based on knowledge of the input set, without actually performing any work.

An estimation function receives as its arguments the first element in the input set for a given job, followed by any work function arguments. The function must return an object whose properties are numbers which represent linear scaling factors for the various resources (cpuHours, gpuHours and outputBytes) as defined in the baseSliceProfile. The inputBytes element is not used here as the Scheduler has the means to calculate that directly on a per-slice basis.
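A hypothetical illustration of how the scheduler might apply these linear scaling factors to a baseSliceProfile; the property names follow the text above, while applyFactors and all the numbers are invented:

```javascript
// Invented baseSliceProfile, using binary-exact numbers for clarity:
const baseSliceProfile = { cpuHours: 0.25, gpuHours: 0.5, outputBytes: 1024 }

// A hypothetical estimation function: given the first element of the input
// set, every resource scales linearly with the element's frame count.
function estimate (element) {
  return {
    cpuHours: element.frames,
    gpuHours: element.frames,
    outputBytes: element.frames
  }
}

// Hypothetical application of the factors to the base profile:
function applyFactors (base, factors) {
  const profile = {}
  for (const key of Object.keys(base)) profile[key] = base[key] * factors[key]
  return profile
}

console.log(applyFactors(baseSliceProfile, estimate({ frames: 10 })))
// { cpuHours: 2.5, gpuHours: 5, outputBytes: 10240 }
```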

An undefined estimation function, or an undefined estimation function result, causes the work to be deployed as though it came from an ad-hoc job.

Estimation functions which routinely yield slice characterizations bearing no resemblance to reality will eventually be blacklisted from the network; if the publishKeystore happens to be an Identity on the Distributed Computer, that person will be notified when the blacklisting happens.

SlicePaymentDescriptor Objects

SlicePaymentDescriptor objects are used to describe the payment that the user is offering to compute one (each) slice of the job. The Compute API defines three fixed value descriptors for use by DCP users; other descriptors can be specified as SlicePaymentDescriptor objects. The fixed value profiles are

SlicePaymentDescriptor objects have the following properties:

Any interface which accepts a SlicePaymentDescriptor object (e.g. exec()) must also handle literal numbers, instances of Number, and BigNums. When a number is used, it is equivalent to an object which specifies offerPerSlice. i.e., .exec(123) is the same as .exec({ offerPerSlice: 123 })
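The number-to-descriptor equivalence can be sketched as a normalization step; normalizeSlicePayment is hypothetical, for illustration only:

```javascript
// Hypothetical normalization: a bare number (or Number instance) offered to
// exec() is treated as { offerPerSlice: number }; objects pass through.
function normalizeSlicePayment (arg) {
  if (typeof arg === 'number' || arg instanceof Number)
    return { offerPerSlice: Number(arg) }
  return arg
}

console.log(normalizeSlicePayment(123))                    // { offerPerSlice: 123 }
console.log(normalizeSlicePayment({ offerPerSlice: 123 })) // { offerPerSlice: 123 }
```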

Shared State

A shared state object is available as the state property of the global worker object in sandboxes, and as the state property of the job handle in clients. The data stored in this object is unrestricted, except that

  1. It must be compatible with JSON
  2. Properties must not collide with methods defined by this specification

When the sandbox is first initialized for a given job, the object will be set to the current state of the object stored by the scheduler. The live object may be re-used by subsequent slices for the same job which are executed on the same worker, even if no synchronization methods were invoked.

Synchronization

The arbiter of state is the scheduler. Updates to the state object happen asynchronously on the network, but this API provides some synchronization primitives which are processed at the scheduler.

There is no synchronization by default. A worker that mutates the state object without invoking a synchronization method will not have its changes propagated back to the scheduler.

Events

The state object is an EventEmitter. In the client, this is set to the JobHandle. The object can emit the following events:

Note: The scheduler does not transmit state synchronization events to clients or workers that are not listening for them.

Example

const paymentAccount = keystore.getWallet()

let job = compute.for(1, 2000, (i) => {
  let test
  let best = worker.state.best

  worker.state.addEventListener("change", () => best = worker.state.best)
  for (let x=0; x < i; x++) {
     test = require('./refiner').refine(x, i)
     if (test < best) {
       worker.state.best = test
       worker.state.set("min", "best")
     }
     progress(x/i)
  }
})
job.state.best = Infinity
let results = await job.exec()
console.log("The best result was: ", job.state.best)

Requirements Objects

Requirements objects are used to inform the scheduler about specific execution requirements, which are in turn used as part of the capabilities exchange portion of the scheduler-to-worker interaction.

let requirements = {
  environment: {
    offscreenCanvas: true,
    fdlibm: true
  },
  engine: {
    es7: true,
    spidermonkey: true
  }
}

Boolean requirements are interpreted as such:

In the example above, only workers supporting OffscreenCanvas and the fdlibm math library, running ES7 on SpiderMonkey, would match. In the example below, any worker which can interpret ES2019 but is not running SpiderMonkey will match:

let requirements = {
  engine: {
    es2019: true,
    spidermonkey: false
  }
}
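The boolean-matching rule implied by these examples can be sketched as a capability filter; workerMatches is a hypothetical illustration (true means the worker must have the capability, false means it must not, and an absent key means "don't care"):

```javascript
// Hypothetical matcher: check a worker's capability map against a
// requirements object, group by group (environment, engine, ...).
function workerMatches (requirements, capabilities) {
  for (const group of Object.keys(requirements)) {
    for (const [cap, wanted] of Object.entries(requirements[group])) {
      const has = Boolean(capabilities[group] && capabilities[group][cap])
      if (has !== wanted) return false
    }
  }
  return true
}

const requirements = { engine: { es2019: true, spidermonkey: false } }
console.log(workerMatches(requirements, { engine: { es2019: true, v8: true } }))           // true
console.log(workerMatches(requirements, { engine: { es2019: true, spidermonkey: true } })) // false
```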

Requirements Object Properties

EventEmitters

All EventEmitters defined in this API will be bound (i.e. have this set) to the relevant job when the event handler is invoked, unless the event handler has previously been bound to something else with bind or an arrow function.

The EventEmitters have the following methods:

Modules

The work specified by the JobHandle.exec and Appliance publish methods can depend on modules being available in the sandbox. This will be handled by automatically publishing all of the modules which are listed as relative dependencies of the job. Client developers can assume that dependencies loaded from the require.path are part of pre-published packages.

The DCP developer ecosystem offers the ability to run CommonJS-style modules and pure-ES NPM packages seamlessly and without transpilation steps on the following host environments

For more information, see the DCP Modules document.

Distributed Computer Wallet

The Distributed Computer acts as a wallet for two types of keys; “Identity” and “Bank Account”. Identity keys identify an individual; Bank Account keys identify accounts within the DCP bank and the DCC contract on the Ethereum network.

Additionally, there are Proxy Keys which can act as an Identity key, a Bank Account key, or both. These proxy keys are enumerable, revocable, and may have other types of restrictions, such as limiting use to a certain amount of DCC or a particular compute appliance.

The preferred way to exchange keys between DCP client appliances, configuration files, end users, etc., is to use encrypted keystores. Distributed Compute Labs strongly discourages developers from writing code which requires users to possess private keys, or to enter passphrases to unlock non-proxy keystores.

For more information, see the Wallet API and Proxy Keys documents.

Example Programs

All example programs are written for operation within any of the environments supported by DCP-Client, provided they are surrounded by appropriate initialization code for their respective environments.

NodeJS

Note: Create a folder named .dcp in your home directory and place your keystore file there, renamed to default.keystore.

async function main() {
  const compute = require('dcp/compute')

  /* example code goes here */
}
const SCHEDULER_URLS = new URL('https://scheduler.distributed.computer')
require('dcp-client').init(SCHEDULER_URLS, true).then(main);

BravoJS

<html><head>
  <script src="https://scheduler.distributed.computer/bravojs/bravo.js"></script>
  <script src="https://scheduler.distributed.computer/etc/dcp-config.js"></script>
</head>
<body onload="module.main.main();">
<script>
  module.declare(['dcp-client'], function mainModule (require, exports, module) {
    async function main() {
      /* example code goes here */ 
    }
  })
</script>
</body>
</html>  

Vanilla Web

<html><head>
  <script src="https://scheduler.distributed.computer/etc/dcp-config.js"></script>
  <script src="https://scheduler.distributed.computer/dcp-client.js"></script>
</head>
<body onload="main();">
<script>
  async function main() {
    const {compute, wallet} = dcp
    /* example code goes here */ 
  }
</script>
</body>
</html>  

1. compute.for() form 2b

let job = compute.for(1, 3, function (i) {
  progress('100%')
  return i*10
})
let results = await job.exec(compute.marketPrice)
console.log('results:    ', results)
console.log('entries:    ', results.entries())
console.log('fromEntries:', results.fromEntries())
console.log('keys:       ', results.keys())
console.log('values:     ', results.values())
console.log('key(2):     ', results.key(2))

Output:

results:     [ 10, 20, 30 ]
entries:     [ [ '1', 10 ], [ '2', 20 ], [ '3', 30 ] ]
fromEntries: { '1': 10, '2': 20, '3': 30 }
keys:        [ '1', '2', '3' ]
values:      [ 10, 20, 30 ]
key(2):      20

2. compute.for() form 1, step overflow

const paymentAccount = keystore.getWallet()
let job = compute.for({start: 10, end: 13, step: 2}, (i) => progress(1) && i)
let results = await job.exec()
console.log(results)

Output: [ 10, 12 ]

3. compute.for() form 1 with group

let job = compute.for({start: 10, end: 13, group: 2}, (i) => progress(1) && i[1]-i[0])
let results = await job.exec()
console.log(results)

Output: [ 1, 1 ]

4. compute.for() form 3

let job = compute.for([{start: 1, end: 2}, {start: 3, end: 5}], 
                       (i,j) => (progress(1), i*j))
let results = await job.exec()
console.log(results)

Output: [[3, 4, 5], [6, 8, 10]]

5. compute.for(), form 3

let job = compute.for([{start: 1, end: 2}, {start: 3, end: 5}], function(i,j) {
  return [i, j, i*j]
})
let results = await job.exec()
console.log(results)

Output: [[[1, 3, 3], [1, 4, 4], [1, 5, 5]], [[2, 3, 6], [2, 4, 8], [2, 5, 10]]]

6. compute.for() form 4

let job = compute.for([123,456], function(i) { 
  progress(1)
  return i
})
let results = await job.exec()
console.log(results)

Output: [ 123, 456 ]

7. compute.for(), form 4, using ES6 function* generator

function* fruitList() {
  yield "banana"
  yield "orange"
  yield "apple"
}

let job = compute.for(fruitList(), (fruit) => progress(1) && fruit + 's are yummy!')
job.requirements = { compute: { gpu: true } }
job.paymentAccount = protocol.unlock(fs.openFileSync('myKey.keystore')) // TODO: update this line
results = await job.exec()
console.log(results.join('\n'))

Output:

bananas are yummy!
oranges are yummy!
apples are yummy!

Some practical handlers

job.public = { name: 'myTest' }                                // show 'myTest' on the slice progress bar in the portal.
job.contextId = 'testOne'                                      // save 'testOne' and the keystore used for deploying it.
job.work.on('console', (msg) => console.log(msg));             // show console events from inside the worker.
job.on('accepted', () => { console.log("Job accepted...") });  // log when the job is accepted.
job.on('status', (status) => { console.log(status) });         // log a status update.
job.on('cancel', (msg) => console.log(msg));                   // log when the job is cancelled.
job.on('result', (msg) => console.log(msg));                   // log when a result arrives.

8. Publish Appliance

let app = new compute.Appliance("videoProcessor", "1.0.0", identificationKeystore) 
app.requires("core")
app.defineFunction("enhance", ["./ffmpeg", "core/serializer"], enhanceFunction)
app.defineFunction("vignette", ["./ffmpeg", "core/serializer"], vignetteFunction)
let stabilize = app.defineFunction("stabilize", ["./ffmpeg", "core/serializer"],
                                   stabilizeFunction)
stabilize.requirements.machine.gpu = true;
let appRequestId = await app.publish()

9. Use Work Function from Appliance

let vp = new compute.Appliance("videoProcessor", "^1.0.0")
let job = vp.for(frames, "stabilize")
let results = await job.exec()

10. Typical ENOFUNDS handler

job.on("ENOFUNDS", (fundsRequired, slice, stage) => {
  console.log(`escrow ran out of money at stage '${stage}'`)
  console.log('slice profile is: ', slice.profile)
  job.escrow(fundsRequired)
  job.resume()
})

11. Run at dynamic market rate

let job = compute.for(1,10, myFunction)
let results = await job.exec()  /* rejects when not enough DCC in escrow 
                                   because no ENOFUNDS handler */

12. Run at 2 DCC per slice with market rate estimation

let job = compute.for(1,10, myFunction)
let results = await job.exec(2)

13. Run at snapshot market rate with dynamic market rate estimation

let job = compute.for(1,10, myFunction)
let results = await job.exec(compute.calculate(job.scheduler.getMarketRate(),
                             job.meanSliceInfo))

14. Run on scheduler with lowest market rate for this job

let job = compute.for(1,10, myFunction)
let alternateScheduler = new compute.Scheduler('https://url.of.other.scheduler') 
let results
let estCGIO = await job.estimate()

if (compute.calculateSlicePayment(estCGIO, await alternateScheduler.getMarketRate()) <
    compute.calculateSlicePayment(estCGIO, await job.scheduler.getMarketRate())) {
  job.scheduler = alternateScheduler
}
results = await job.exec(compute.marketValue)

15. Run at 25% over market rate or 20 DCC (whichever is least) per slice

let job = compute.for(1,10, myFunction)
let results = await job.exec(compute.marketValue(1.25, 20))

16. Run with total cost awareness

let estCostGuess, results
let totalCost = 0
let scheduler = new compute.Scheduler()
let startTime = Date.now()
let job = compute.for(1,10, myFunction)

depQuote = await scheduler.getDeploymentQuote(job) 
estQuote = await scheduler.getEstimationQuote(job)
estCostGuess = scheduler.estimationSliceCount * +estQuote
console.log(`deployment will cost ${+depQuote} DCC`)
console.log(`estimation will probably cost no more than ${+estCostGuess} DCC`)

job.on('resultAvailable', (cost, slice, readyState) => {
  totalCost += cost
})

job.on('readyStateChange', (readyState) => {
  if (readyState === 'estimated') {
    console.log('Estimation cost:', totalCost)
  } else {
    updatePaymentOffer(1.0)
  }
})

job.on("ENOFUNDS", (fundsRequired, slice, readyState) => {
  console.log(`job ran out of money at state '${readyState}'`)
  console.log('slice profile is: ', slice.profile)
  if (readyState === "estimation") {
    job.escrow(fundsRequired)
    job.resume()
  } else {
    console.log('quitting')
    job.cancel()
  }
})

async function updatePaymentOffer(marketRatio) {
  let mv = await scheduler.getMarketValue()
  job.slicePayment = marketRatio * compute.calculateSlicePayment(job.meanSliceProfile, mv)
  console.log(`Set slice payment to ${job.slicePayment}`)
  console.log((job.status.slices.total - job.status.slices.distributed) +
              ' slices remaining in the scheduler')
}

setInterval(function matchPaymentToMarketValue() {
   if (Date.now() - startTime > 86400000)
     updatePaymentOffer(1.1)
   else
     updatePaymentOffer(1.0)
}, 600000)

results = await job.exec(0, estQuote, depQuote)