
This page serves as a wishlist for FireCloud features needed by the Broad GDAC to match the productivity of FireHose.

Requested Features

Data Model

  • More sophisticated complex expressions
    GDAC data is notoriously messy, and certain algorithms are generalized across a variety of data subtypes. For this reason, task configurations in FireHose had MVEL expressions that could, among other things, "choose" between two possible inputs, and FireHose would figure out how to map the correct inputs to the workflow language. FireCloud supports only simple expressions: literals, attributes, or attributes of members in a set (e.g. "Value", "this.value", "this.samples.value"). This limitation necessitates ugly (and hard-to-maintain) hacks, such as spinning up a VM as a task just to choose between input files, or adding extra optional inputs to the WDL and choosing among them as the first step in a task.
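    As a rough illustration, the "extra optional inputs" workaround might look like the following WDL sketch, which uses the standard-library select_first function; the task and input names here are hypothetical:

```wdl
task choose_input {
  File? maf_capture   # populated for some sample sets
  File? maf_wgs       # populated for others
  command {
    # The choice between inputs happens inside the task (on a VM),
    # because FireCloud expressions cannot express it in the config.
    ./analyze.sh ${select_first([maf_capture, maf_wgs])}
  }
  output { File report = "report.txt" }
}
```

    With richer expressions in the method configuration, this selection logic could live in the data model instead of burning VM time.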


  • "Map" data type

    A common use case in FireHose is to select input files from the samples in a sample set and pass them to an analysis via a two-column tsv file that maps sample ids to the data files. An analogous method exists in FireCloud: accept as input an array of sample ids and an array of the data files. The problem arises when the data is sparse – the two arrays are no longer parallel, and the mapping is broken. From a task author's perspective, this could be solved if there were a Map data type in WDL. In FireCloud, you could pass the input as ">", and no sort of Null sentinel value would be required in the bucket.
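    A sketch of what a Map-typed task could look like, using the write_map function already in the WDL standard library (task and input names hypothetical):

```wdl
task gather {
  # One value carries the sample-id-to-file association; sparse samples
  # are simply absent from the map, so no null sentinel file is needed.
  Map[String, File] sample_files
  command {
    # write_map serializes the map to the two-column tsv the tool expects
    ./run_analysis --map ${write_map(sample_files)}
  }
  output { File results = "results.tsv" }
}
```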

  • Composability / Imports

    Different workflows often share common tasks, such as a preprocessor or a report generator. Since WDLs are not composable, each workflow must independently maintain a copy of each shared task. A temporary solution used by gdac-firecloud is a script that syncs task definitions within the repository, but this has limitations: it only works with tasks defined in that repository, and it requires manual intervention by the workflow developer, via 'make sync'. Import statements are currently part of the WDL spec, but are not implemented in FireCloud or any of the development tools (e.g. Cromwell, wdltool).
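    For reference, the import syntax already in the WDL spec would let a workflow reference a shared task definition directly; the URL and names below are hypothetical:

```wdl
# Not yet supported by FireCloud or Cromwell at the time of writing
import "https://example.com/tasks/preprocessor.wdl" as prep

workflow my_workflow {
  Array[File] samples
  call prep.preprocess { input: samples = samples }
}
```

    With this in place, a fix to the shared preprocessor would propagate without any 'make sync' step.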


  • Task outputs that are intermediate pollute the Google bucket

    Francois brought this one to my attention – in a multi-task workflow, files can be passed from task to task by making them task outputs. Often, such an intermediate file is no longer useful once the final step of the workflow has run, yet it must still be written to the Google bucket and assigned a place in the data model, polluting both the workspace and the bucket. FireCloud needs a way to specify that an intermediate file is not a workflow output, or to allow workflows to declare their outputs explicitly.
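    WDL's workflow-level output block suggests one possible shape for this: only declared outputs would persist to the bucket and data model. A sketch, with hypothetical task names:

```wdl
workflow multi_step {
  call preprocess
  call analyze { input: data = preprocess.intermediate }

  # Desired behavior: only files declared here survive the run;
  # preprocess.intermediate would be discarded afterward.
  output {
    File final_report = analyze.report
  }
}
```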

Developer Tools

  • Debugging
    • Running an on-prem instance
    • SSH access to observe running jobs, look at files in a workspace
    • Fail-Fast --> Don't let me submit a workflow that has invalid inputs


  • Long Google bucket paths are unreadable; abbreviate them to file basenames, since in most cases the directory structure is irrelevant, especially from the Data Model's perspective.


  • Accurate Documentation
    The current API page is intended to list all available orchestration (i.e. user-facing) API calls, but it is either incomplete or does not accurately represent the available calls. On a couple of occasions, Alex has directed me to an API endpoint that is not listed on this page, but is listed on the page for the underlying service (Rawls, Agora, etc.). Some of these APIs are publicly accessible as pass-throughs, but the distinction has not been made clear.
  • Get latest snapshot of Method or Config
    Some endpoints require a snapshot_id (e.g. !/Method_Repository/getMethodRepositoryConfiguration), but there is no easy way to determine the latest snapshot id without retrieving all snapshot_ids and taking the max. These endpoints should accept a 'latest' option, or the snapshot_id should be optional and default to the latest version.


  • Need a reliable way to kill runaway workflows. There are many reasons a workflow can appear to hang (a Docker issue, an inadvertent infinite loop, services miscommunicating). I still have a test_gistic workspace that claims to be running, but I have no idea whether that means a Google VM is still running, or FireCloud is just confused.