Update and Schedule Models

When you create a model from a data source, or import data into an existing model, an import job is automatically created as well, so that you can schedule data imports.

Context

Schedule a data import if you want to regularly refresh the data from the original data source.

See “Import Data Options” in step 3 of this help topic for details.

You can import data from multiple queries (and data sources) into a model, and each of these imports can be separately scheduled.

Data sources that support scheduling:

  • SAP Business Planning and Consolidation (BPC)
  • SAP Business Warehouse (BW)
  • SuccessFactors
  • A Salesforce (SFDC) user-predefined dataset or entity-based model
  • An SAP BusinessObjects BI platform universe (UNX) query
  • SAP ERP Central Component (SAP ECC)
  • SQL Databases
  • OData services
  • XLSX (Excel) and CSV (comma-separated values) files imported from a file server (not imported from your local machine)
  • Concur
  • Google Drive
  • SAP Integration Suite Open Connectors
Note
SAP Open Connectors uses cloud credits and may incur additional charges. You can review your Open Connectors cloud credit usage on your SAP Integration Suite overview page. For more information, see Use SAP Integration Suite Open Connectors.

Procedure

  1. Find an import job in one of these ways:
    • If your model is already open, switch to the Data Management screen. Data import jobs are listed under Import Jobs.
    • On the main menu, choose Connection, and then go to the Schedule Status tab. All import jobs associated with any data source appear here. Point to a row, and select Open Data Model or Open Dimension (for public dimensions).

      The Data Management screen appears. Data import jobs are listed under Import Jobs.

  2. If you want to run an Import Data job immediately, select (Refresh Now), or for BPC jobs, (Update Model Now).
    If the Import Data job was created when creating a model from a data source, the import method used is "Update". If the Import Data job was created when importing data into an existing model, the import method is the one that was chosen when the job was created. See “Import Data Options” in step 3 of this help topic for the available import methods.
    Note
    When refreshing a Salesforce (SFDC)-based model, the Update Model job uses the “Clean and Replace” import setting. This setting cleans up and re-creates the model, so any hierarchies defined in the model are lost when the data is refreshed.

    To receive an email notification when a refresh job fails, select Notify me of refresh failures by email.

  3. If you want to run an Import Data job using different data-handling settings, select a job and then select (Import Settings).
    1. Choose how you want existing data to be handled.
      Note
      These options affect both measures and dimensions. For related information, see Combine Data with Your Acquired Data.
      Import Data Options
      Update

      Updates the existing data and adds new entries to the target model. The scope of this update is based on a combination of all dimensions in the target model. For a more refined scope, use either the Clean and replace selected version data or Clean and replace subset of data update options.
      Append

      Keeps the existing data as is and adds new entries to the target model.
      Note
      When importing data into a planning model using the Append option, you can choose between aggregating duplicated rows, or rejecting them.
      Clean and replace selected version data

      Deletes the existing data and adds new entries to the target model, only for the versions that you specify in the import. You can choose to use either the existing version or specify a new version under Version. If you specify to import data for the "actual" version, only the data in the "actual" version is cleaned and replaced. Other versions, for example "planning", are not affected.
      Clean and replace subset of data

      Replaces existing data and adds new entries to the target model for a defined subset of the data, based on a scope of selected versions using either the Existing Version or New Version buttons. You can also limit the scope to specific dimensions. To define a scope based on a combination of dimensions, select + Add Scope and use the Select a dimension field to specify a dimension.

      When using this import method, the values of the incoming data for the dimensions that define the replacement scope determine the portion of the model’s transaction data that is cleaned up and then replaced by the incoming data. The dimension members of other dimensions that are not in the replacement scope are not considered.

      For example, when the Date and Region dimensions are defined as part of a scope, transaction data entries in the model that match the specific dates and regions in the source data are replaced, as illustrated in the sketch below. Existing data that does not match the scope is kept as is.
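
      To make this concrete, the following minimal, hypothetical pandas sketch (not part of the product) mimics how a Clean and replace subset of data import could treat existing transaction data when Date and Region define the scope. The column names and values are purely illustrative.

        import pandas as pd

        # Existing transaction data in the model (illustrative values only).
        existing = pd.DataFrame({
            "Date":    ["2024-01", "2024-01", "2024-02", "2024-02"],
            "Region":  ["EMEA",    "APJ",     "EMEA",    "APJ"],
            "Product": ["A",       "A",       "B",       "B"],
            "Amount":  [100,       200,       300,       400],
        })

        # Incoming data from the import job.
        incoming = pd.DataFrame({
            "Date":    ["2024-01", "2024-01"],
            "Region":  ["EMEA",    "EMEA"],
            "Product": ["A",       "C"],
            "Amount":  [150,       50],
        })

        # Replacement scope: the dimensions chosen with + Add Scope.
        scope = ["Date", "Region"]

        # Every scope combination present in the incoming data is cleaned up...
        scope_keys = incoming[scope].drop_duplicates()
        in_scope = existing.merge(scope_keys, on=scope, how="left",
                                  indicator=True)["_merge"] == "both"

        # ...and then replaced by the incoming rows. Rows outside the scope
        # (here 2024-01/APJ and all of 2024-02) are kept as is.
        result = pd.concat([existing[~in_scope.values], incoming],
                           ignore_index=True)
        print(result)

      In this sketch, the 2024-01/APJ and 2024-02 rows stay unchanged, while the single 2024-01/EMEA row is removed and replaced by the two incoming 2024-01/EMEA rows, matching the behavior described above.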

    2. If you want to configure this Import Data job to delete the existing data and add new entries to the target model, switch on Reset Model.
      When Reset Model is on, all data in the model is cleaned before it can be replaced with the newest set. This cleans any existing data in all dimensions, excluding public dimensions.
      Note
      • Only the model owner can update the Reset Model setting.
      • If Reset Model is switched on, only the owner of the model can run that job or change its settings.
  4. For the following connection types, you can select (Edit Query) to make changes to the query, including parameter values.
    • SAP Business Warehouse (BW)
    • SuccessFactors HCM suite
    • An SAP BusinessObjects BI platform universe (UNX) query
    • SQL Databases
    Note
    Query filters can be modified, but columns cannot be added, removed, or changed. In the case of SQL databases, freehand SQL queries can't be edited.
  5. To schedule the execution of jobs, select (Schedule Settings), choose between the following options, and then select Save to save your scheduling settings:

    Scheduling settings

    None: Select this option when you want to update the data manually. (Use the (Refresh Now) icon in the Sources list.)

    Once: The import is performed only once, at a preselected time.

    Repeating: The import is executed according to a recurrence pattern. You can select a start and end date and time, as well as a recurrence pattern.

    Start Date: The date on which the task is triggered for the first time.

    Time Zone: The time zone in which the scheduled times are interpreted, so that the task is triggered when you expect.

    Start Time: The time of day at which the task starts.

    Note
    The scheduled times are adjusted for Daylight Saving Time according to the time zone you select.
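
    As a rough illustration of this behavior (a hypothetical Python sketch, not the product's scheduler), the snippet below shows how the same 08:00 local start time in a DST-observing time zone, here assumed to be Europe/Berlin, maps to different UTC instants before and after the clock change:

      from datetime import datetime
      from zoneinfo import ZoneInfo  # Python 3.9+

      tz = ZoneInfo("Europe/Berlin")  # example time zone that observes DST

      # The same 08:00 local start time, before and after the DST switch
      # (DST started on 31 March 2024 in this time zone).
      before_switch = datetime(2024, 3, 29, 8, 0, tzinfo=tz)  # CET,  UTC+1
      after_switch = datetime(2024, 4, 1, 8, 0, tzinfo=tz)    # CEST, UTC+2

      for run in (before_switch, after_switch):
          print(run.isoformat(), "->", run.astimezone(ZoneInfo("UTC")).isoformat())

      # 2024-03-29T08:00:00+01:00 -> 2024-03-29T07:00:00+00:00
      # 2024-04-01T08:00:00+02:00 -> 2024-04-01T06:00:00+00:00
      # The local start time stays at 08:00, as described in the note above.
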
  6. If you want to run multiple Import Data jobs together, in a specified order, create a source grouping.
    A grouping can include jobs from public dimensions as well as the model. Running the grouping refreshes the public dimensions and model together.
    1. Select the jobs that you want to group together, and then select (Schedule Settings).
    2. Review the order of the jobs in the grouping, and change it if necessary.
      Alternatively, you can select the job that you want to run first, select (Schedule Settings), and then select Set a Dependency to add the other jobs to the grouping in the correct order.
    3. (Optional) Edit the source grouping name.
    4. Choose one of these group processing options:
      Stop if any query fails: If any of the import jobs fails, the group processing stops. You can then cancel the remaining jobs, or try to fix the cause of the failure, and later resume execution of the grouping from the same point where execution stopped.
      Skip any failed query: If any of the import jobs fails, the remaining jobs are still processed. (The difference between these two options is illustrated in the sketch after these steps.)
    5. Set the scheduling frequency as explained in step 5.
      Use the None option if you want to create and save a source grouping, but want to refresh it manually.

      Also use the None option if you have an existing schedule for a source grouping, and want to turn off the scheduled updates but not delete the grouping.

    6. (Optional) Specify the Model Refresh period for a public dimension.
      Specifying a value prevents a public dimension from being continually updated by various models that contain that dimension. For example, if you specify a value of 2 hours, the dimension can be refreshed a maximum of once every 2 hours.
    7. Save your grouping and schedule settings.
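
    The two group processing options above can be pictured with a small, hypothetical Python sketch (not the product's job runner); the job names and the failure are made up for illustration:

      def run_grouping(jobs, stop_on_failure=True):
          # Run the import jobs of a grouping in order. 'jobs' is a list of
          # (name, callable) pairs; a raised exception stands in for a failed query.
          for name, job in jobs:
              try:
                  job()
                  print(f"{name}: succeeded")
              except Exception as err:
                  print(f"{name}: failed ({err})")
                  if stop_on_failure:
                      # "Stop if any query fails": remaining jobs are not run.
                      print("Group processing stopped.")
                      return
                  # "Skip any failed query": continue with the remaining jobs.

      def failing_job():
          raise RuntimeError("source unreachable")

      jobs = [
          ("Region dimension", lambda: None),
          ("Sales data", failing_job),
          ("Plan data", lambda: None),
      ]

      run_grouping(jobs, stop_on_failure=True)   # stops after "Sales data" fails
      run_grouping(jobs, stop_on_failure=False)  # skips "Sales data", still runs "Plan data"
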
  7. If you chose the None option for scheduling, select (Refresh Now) to run the import job now.
    To receive an email notification when a refresh job fails, select Notify me of refresh failures by email.
  8. Because a public dimension can be used by multiple models, and be included in multiple groupings, you may want to see which models are scheduled to refresh a public dimension. Select a job, select (Schedule Settings), and then select View Models Using this Source.

For Update Model jobs, the procedure is similar:

  1. If you want to run an Update Model job immediately, select (Refresh Now), or for BPC jobs, (Update Model Now).
  2. If you want to run an Update Model job using different data handling settings, select a job and then select (Import Settings) to choose how you want existing data to be handled.
    Note
    These options affect both measures and dimensions. For related information, see Combine Data with Your Acquired Data.
    Update Options
    Clean and Replace: Deletes the existing data and adds new entries to the target model.
    Append: Keeps the existing data as is and adds new entries to the target model.
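
    As a quick, hypothetical illustration of the difference (a pandas sketch, not the product's import logic; the account names and amounts are made up):

      import pandas as pd

      existing = pd.DataFrame({"Account": ["Revenue", "Cost"], "Amount": [100, 40]})
      incoming = pd.DataFrame({"Account": ["Revenue"], "Amount": [120]})

      # Clean and Replace: the existing data is deleted; only the new entries remain.
      clean_and_replace = incoming.copy()

      # Append: the existing data is kept as is and the new entries are added.
      append = pd.concat([existing, incoming], ignore_index=True)

      print(clean_and_replace)
      print(append)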

Results

After a scheduled job runs, the result is shown on the Schedule Status tab. If any errors or warnings occurred, select the job to see the job details in the Data Timeline panel.

If any rows in the dataset were rejected, you can select Download rejected rows to save the rejected rows as a .csv file. You can then examine this file to see which data was rejected. You can fix the data in the .csv file and import it back into the model using the Import Data workflow, or fix the data in the source system.
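
The exact columns of the downloaded file depend on your model, but a short, hypothetical pandas sketch like the following can help you inspect the rejected rows before re-importing them (the file name and the Region column are assumptions for illustration):

    import pandas as pd

    # File name and column names are illustrative; use the file you downloaded.
    rejected = pd.read_csv("rejected_rows.csv")

    print(rejected.head())        # look at the rejected records
    print(rejected.isna().sum())  # count missing values per column

    # Example fix: fill a missing member with a placeholder, then save the file
    # so it can be imported again through the Import Data workflow.
    rejected["Region"] = rejected["Region"].fillna("Unassigned")
    rejected.to_csv("rejected_rows_fixed.csv", index=False)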

Note
If a connection was shared with you without sharing credentials, you'll need to enter your own credentials when you run a data refresh.