How Many Connector Rules Do We Really Need? https://blackdiamondadvisory.com/2022/04/12/connector-business-rules/ Tue, 12 Apr 2022 06:43:30 +0000

Not as many as you may think (or were taught)

Fellow OneStream enthusiast, if your list of OneStream Connector Business Rules looks like the below then this blog is for you. If it doesn’t, I hope you still find it useful or at least mildly interesting!

Figure: Why, oh why, did I not see the light earlier?

Now I must confess, many of my early solutions looked like this: it is the common and generally taught method and it feels like the natural way to set Connector Business Rules up due to the way we link a DataSource to a Connector Rule.

Figure: The old way, one rule per DataSource

The Epiphany

As you may be aware I am an inherently lazy developer have a strong preference to build reusable functions wherever possible. It was as I was creating a myriad of API connector rules for a client that I realized the error of my ways. Most of the logic in these rules is the same, declare a source database connection, call that connection, get the data and return it to the stage.

Why, I asked myself, if I have a standard OAuth2 security handling function in an Extender rule to save me the hassle of rewriting it (ahem, to facilitate standardization), am I copying, pasting, and tweaking the same Connector rule over and over?

As you read through this post, you will (hopefully) come to understand why a lazy developer is a good developer: writing and testing core functionality and then reusing it means said developer produces quality solutions, does not go bonkers with boredom writing nearly-identical code ad infinitum, and is free to go on to bigger and better things. This practice actually is not laziness, it is competence. And laziness.

One Rule to Rule Them All

How do we do it?

Having had my epiphany moment, I deleted all the connector rules I had created to that point (to avoid the temptation to just continue with what I had built and to force myself to deliver something new) and set about creating one rule to rule them all. For this blog I will be using REST APIs that return JSON data as the example; however, the same methodology is applicable to any source type, be it API, SQL, SAP, FTP, and so on.

But how does the rule know what to connect to and what the data looks like?

There were two key items I considered first in creating a single rule and making it work: how does the rule know which DataSource is initiating it, and how do we flex the fields returned?

Getting the DataSource Name

This is easy enough: when a DataSource in OneStream (or almost anything else) calls a business rule, the action knows what told it to do something. In a connector rule this lives in the args object.

Figure : But where do we get the DataSourceName, oh, there it is

The name of the DataSource for the import process is captured in args.DataSourceName. Problem one solved!
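To make that concrete, here is a minimal sketch (in VB.NET, the usual OneStream Business Rule language) of how a single rule might branch on args.DataSourceName inside the connector's Main function. The DataSource names and helper methods are hypothetical placeholders; take the Main signature and action handling from the connector rule template OneStream generates for you.

' Inside Main, route the request based on which DataSource kicked off the import
Select Case args.DataSourceName
	Case "ERP1_TB"
		' Hypothetical helper that handles the ERP1 trial balance API
		Return Me.ProcessErp1TrialBalance(si, api, args)
	Case "ERP2_TB"
		Return Me.ProcessErp2TrialBalance(si, api, args)
	Case Else
		Throw New Exception("No connector logic defined for DataSource " & args.DataSourceName)
End Select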

Structuring the data, data field types

My preferred method here is to use custom classes created within the connector business rule.

Figure: Give a class a home

The Example

For brevity we will show two API calls, but this methodology will work with as many as you like. Additionally, because we are only looking at two examples, I am going to keep the logic simple. As the volume of sources increases, I would advise additional optimization of the code. For example, one rule to hold the logic, another to hold the classes and a config file to hold the connection details.

I love a class

Without further ado let’s go look at one of the Trial Balance classes.

Figure: Each DataSource needs two classes

First off, you may notice I have lied to you (I will try not to do this again): we create two classes here, not the one I led you to believe.

  1. The first is a simple flat object used to provide the field list to the DataSource for mapping fields to dimensions; it is additionally used to create a DataTable to hold the flattened JSON data returned by the API.
  2. The second represents the nested JSON structure returned by the API; this is how our friend Newtonsoft.Json knows how to deserialize the nested JSON string (a sketch of both classes follows below). I may have forgotten to mention we need additional References to make all this work.
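As an illustration only (the real classes will mirror whatever your API actually returns), the pair might look roughly like the following VB.NET, with made-up field names:

' 1. Flat record: one property per stage field; drives the DataSource field list and the DataTable columns
Public Class TrialBalanceRecord
	Public Property Company As String
	Public Property Account As String
	Public Property PeriodEndDate As String
	Public Property Amount As Decimal
End Class

' 2. Shape of the nested JSON payload, so Newtonsoft.Json knows how to deserialize it
Public Class TrialBalanceResponse
	Public Property Company As String
	Public Property Lines As List(Of TrialBalanceLine)
End Class

Public Class TrialBalanceLine
	Public Property Account As String
	Public Property PeriodEndDate As String
	Public Property Amount As Decimal
End Class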

Figure: The additional References we need for this connector methodology

The key ones I will explain here are Newtonsoft.Json (explained above), Microsoft.Identity.Client (required if you have an OAuth2-protected API), and the last two, which are the beginnings of my epiphany moment: a Business Rule in OneStream's Extender section that I use to manage all OAuth2 security token calls.

Getting the fields

To get the field list for the DataSource we just need to utilize the class we created and return its field names to the Main function. Within this you can see our first use of args.DataSourceName, which is used to return the correct class object for the DataSource. Then we just use a For-Each loop over the class's properties, add the field names to a List, and return it:
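A sketch of that loop, using .NET reflection over the flat class's properties (the class name is the hypothetical one from the earlier sketch):

Dim fieldList As New List(Of String)
' Assumption: pick the class that matches args.DataSourceName; hard-coded here for brevity
Dim recordType As Type = GetType(TrialBalanceRecord)
For Each prop As System.Reflection.PropertyInfo In recordType.GetProperties()
	fieldList.Add(prop.Name)
Next
Return fieldList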

Figure: Getting the fields from our classes

Et voilà, we have field names in our DataSource to map to our Source Dimensions.

Figure: We have fields to map!

The URL

Often a URL will have additional components required to identify or filter the records we require. Below is an example of how this could look.

https://apicalls.mydomain.com/SendTBFromERP1ToOneStream?company=myCompanyID&postingEndDate=myPeriodEndDate

The highlighted items (the endpoint and the query-string parameters) may vary, not exist, or have additional elements. We vary these through a simple use of dictionaries, passing in the items we need for the API to accept our call.
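One way to express that in VB.NET is to keep the variable parts in a Dictionary and append them as query-string parameters; the base URL and keys below are placeholders taken from the example above:

Dim queryParams As New Dictionary(Of String, String)
queryParams.Add("Company", "BlackDiamondAdvisory")
queryParams.Add("PostingEndDate", "20221231")

Dim urlBuilder As New System.Text.StringBuilder("https://apicalls.mydomain.com/SendTBFromERP1ToOneStream")
Dim separator As String = "?"
For Each kvp As KeyValuePair(Of String, String) In queryParams
	' Append each parameter, URL-encoding the value
	urlBuilder.Append(separator).Append(kvp.Key).Append("=").Append(Uri.EscapeDataString(kvp.Value))
	separator = "&"
Next
Dim requestUrl As String = urlBuilder.ToString()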

Figure: Handling varying URL components

This will generate a URL string like the below:

  1. https://apicalls.mydomain.com/SendTBFromERP1ToOneStream?Company=BlackDiamondAdvisory&PostingEndDate=20221231
  2. https://apicalls.mydomain.com/SendTBFromERP2ToOneStream?Company=BlackDiamondAdvisoryUKLtd&StartDate=20220101&EndDate=20221231&Fixed=True

The Data

Finally, the data, which is why we’re all here I assume.

For each DataSource we use the relevant class object to manage deconstructing the nested JSON data into a flat structure. Following this, all we need to do is create a DataTable to put the data in and then return it to the stage.
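A rough sketch of that step, reusing the hypothetical classes from earlier and assuming jsonString already holds the API response:

' Deserialize the nested JSON into the response class
Dim response As TrialBalanceResponse = Newtonsoft.Json.JsonConvert.DeserializeObject(Of TrialBalanceResponse)(jsonString)

' Build a DataTable whose columns match the flat record / stage field list
Dim dt As New DataTable(args.DataSourceName)
dt.Columns.Add("Company")
dt.Columns.Add("Account")
dt.Columns.Add("PeriodEndDate")
dt.Columns.Add("Amount")

' Flatten each nested line into one row
For Each tbLine As TrialBalanceLine In response.Lines
	dt.Rows.Add(response.Company, tbLine.Account, tbLine.PeriodEndDate, tbLine.Amount)
Next

' Hand the populated table back per the GetData branch of the connector rule template
Return dt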

Figure: Get me some data!

The Conclusion

As always with OneStream there are at least five ways to do anything. The above is just one illustration of simplifying Connector Rules.

A key factor in making the above methodology a success is to start by identifying the source databases, URL endpoints and/or fileservers needed for data. Please do take into consideration that there may be additional sources required after your initial build is in Production! Categorize these into sources that make sense to group together and go from there. The categorization is completely up to you. My recommendation is to think along the lines of source type (for example, REST API vs. SQL) and/or by source endpoint. For example, if you have 10 imports from multiple systems (TB, FA, AP, AR, etc.), maybe the right mindset is to create one rule for each system.

One thing is for sure, save yourself time and stop creating a rule for each DataSource!

No Calc To Calc https://blackdiamondadvisory.com/2022/03/29/no-calc-to-calc/ Tue, 29 Mar 2022 19:24:48 +0000


The Gateway to DM Sequences via Workflow Profiles – “No Calculate”

Have you ever wondered what you can do with the calculation type “No Calculate” when setting up a Calculation Definition in your Workflow Profile? It’s sort of odd: Calculation Definition implies a calculation; No Calculate does precisely the opposite. “No Calculate”, despite its name, is a powerful technique that can kick off a Data Management Sequence (to, um, calculate) from a Workflow Profile Process step, which opens the world to endless types of different scenarios (or at least calculations running after data loads).


Use Case

In this example, we will be loading new growth factors and executing a business rule to calculate forecasted dollar amounts. The key here is that the calculation ought to fire directly after the data load.

Pre-Reqs

1. A Workflow Profile Child that at a minimum has a “Process” step in it. In my example, we will use the “Import, Validate, Process” Workflow name.


2. You will need to have a Data Management Sequence setup and working (and of course one that calls a Business Rule). In this example, I have created a DM Sequence and Step called “NoCalc_Demo”. Spaces are allowed if you wish!


3. You’ll need to have an Entity to run this job from. I have created an entity called “NoCalc” to launch this rule. While you can use this entity in the DM Step, this is not always the entity that will be calculated in the DM Step. A separate, calculation-only Entity is a good idea because it segregates calculations that run on load from the real consolidation process.


4. Lastly, you will need a Data Quality Event Handler. If you do not already have this Business Rule created, navigate to Application->Tools->Business Rules and create a new rule with a type of “Data Quality Event Handler”. The name of the rule – “DataQualityEventHandler” – is assigned automatically on creation of the rule and cannot be changed. You will only ever have one of this rule type.


Now you will need to place some code into the business rule that actively identifies a "No Calculate" calculation type. Two key areas in the business rule that I would like to call out are on lines 45 and 67 of the rule. The first identifies a No Calculate process type, picking up the process in the background (automagically); the second starts the Data Management Sequence with a filter value (this will come into play in a later step).

Neither BDA nor I will take any credit for writing this rule, as it was passed down from the Legend himself. No changes are necessary to this business rule.


Where To Find It If You’re Not Keen on Lots of Typing

The full rule is a bit long to retype and isn't populated when the rule is created. It is available in GolfStream and is a direct copy and paste (be sure to do a validate just in case). There's only one DataQualityEventHandler rule, so it won't be hard to find; look for it under the Extensibility Rules hierarchy.

Connect the Dots

At this point, we have all the necessary components to kick off the DM Sequence from our WF Profile. The linkage between the Workflow Profile Child and Data Management can now be done in just a few easy steps:

1. Within the WF Profile Calculation Definition setup you will need to add an entry with the following details:

    • Entity – NoCalc (Step 3 from the Pre-Reqs)
    • Parent – You can leave this at Unassigned
    • Calc Type – No Calculate – hence the reason you are reading this blog! Seriously, this is not terribly intuitive, but this really and truly is the Calc Type.
    • Scenario Filter Type – If you would like to only run for certain Scenario Types you can select a value.
    • Confirmed – Checkbox Y/N value. In this example we will leave it blank.
    • Order – Defined order of operations that will be unique to your use case. In this case I only have one step so I will leave it at 0.
    • Filter Value – Must equal the Data Management Sequence name. Check line 67 of Tom's code to see that this value is what is started in the DM Sequence. You must type in the name of the Sequence directly, so either have a good memory or write it down on a piece of paper; there is no dropdown control to give you the names of the Sequences.


2. You are done!

Final Product

Once we have completed the setup, we can go ahead and step through the process and test it out. So here we go…

1. Imported data successfully:


2. Validated data successfully:


3. Load and Process the cube successfully:


4. Data Management Sequence executed successfully, as shown in the Task Activity monitor:

We can also review our cube views to make sure the numbers are calculated as anticipated post-process. In this demo example we have the original data plus a growth factor, and then a second slice below it that will contain the calculated data.

Pre Process


Post Process


Automation Nirvana Achieved

The requirement to run a calculation (or a series of calculations) after a data load is a common use case. That calculation must be run manually unless the No Calc-to-Calculate Workflow Process properties are defined and the DataQualityEventHandler rule (named for you on creation) is in place.

With those two components in hand, with a simple and reusable Business Rule you can unleash the power of OneStream’s Workflow engine to create simple and integrated processes! What could possibly be easier?

Keeping Your Balance With Unbalanced Math https://blackdiamondadvisory.com/2022/03/15/keeping-your-balance-with-unbalanced-math/ Tue, 15 Mar 2022 17:06:00 +0000

The Riddle of the Docs

Gentle Reader, if you – as your author most definitely was when he first researched it – are somewhat puzzled when reading about Unbalanced Math, this post is for you. It is a powerful method that illustrates OneStream’s flexibility and utility, prevents pretty dramatic error messages, and stops you from writing stupendously ill-thought-out, nonoptimal solutions to simple problems. Seriously, if you write Finance Business Rules, you need to understand this.

Oh sure, I know all of this now, but as I first read through the documentation, I simply couldn’t figure it out. Why is it broken out separately from plain old api.Data.Calculate? “Unbalanced” is such an odd name. How do I use it? In short: what on earth are they going on about in the Design and Reference Guide and why should I care?

I’m sad to relate that as I read (and re-read) on, I became more confused. In particular, the references to Data Buffers just seemed…odd. I had to – as with practically everything else that involves OneStream and Yr. Obt. Svt. – figure this out by actually using it in a concrete functional use case of my own.

TL;DR, but you should

“Unbalanced” simply means that when a target member tuple in an api.Data.Calculate statement does not mirror the dimensions in the source member tuples, it is unbalanced in its dimensionality. When the tuples – which translate to Data Buffers – are unbalanced, a standard calculation will fail.

In Itsy Bitsy Words

A simple example is a centrally stored rate that is applied to multiple Entities and UD members. In pseudocode, it would look like:

Distribution = Sales * Distribution_Rate at No Geography at No Product

This looks simple enough. Sales is in multiple States (sorry, international readers) and Products. There is a single rate for the entire country by month. Multiplying Sales by that centrally stored expense rate for all States and Products should be a trifle.

Balanced Tuple Dimensions

A#Sales could be at O#Import and O#Forms, so O#BeforeAdj will be the Origin dimension member. These rate calculations only make sense at V#Periodic so that too will be explicitly set in the method. All calculations go to O#Forms. So far, so good.

First Pass, But Fails

That formula might look something like this:
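The screenshot of the formula has not survived here, but based on the description above and below, the naive version would be along these lines (member names are the ones used in this example; treat this as a reconstruction rather than a verbatim copy):

api.Data.Calculate("A#Distribution:O#Forms:V#Periodic:U1#Total_Products.Base = A#Sales:O#BeforeAdj:V#Periodic * A#Distribution_Rate:O#BeforeAdj:V#Periodic:U1#No_Product")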

Note that the U1#Total_Products.Base member filter will apply A#Distribution’s calculated results to every Product where A#Sales exists.

Also note that U1#No_Product is in A#Distribution_Rate’s tuple but not in the target A#Distribution’s tuple; this is no accident as the calculation must write to many Products using a rate stored at a single calculation-only driver Product.

Data Management

The super simple Data Management job cycles through just two States – enough for illustrative purposes and no more.


The data is equally simple as is the Excel proof of concept math:


Seriously, this is x = y * z. How hard could this be?

Harder Than Anticipated


No, OneStream doesn’t throw an error quite like that, but it’s almost as bad:


Ouch.

Despite the error message’s length, OneStream is pretty clear about the issue: A#Distribution_Rate has an explicit U1#No_Product member definition and A#Distribution does not; the member filter of U1#Total_Products.Base does not balance U1.

The error message states that either a specific target U1 member should be used or U1#All could apply the calculated results to, well, all U1 members.

Please Do Not Do This

U1#All will work in this very specific context:
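Again reconstructing the missing screenshot: the quick-and-dirty fix simply swaps the target's Product member filter for U1#All, something along these lines:

api.Data.Calculate("A#Distribution:O#Forms:V#Periodic:U1#All = A#Sales:O#BeforeAdj:V#Periodic * A#Distribution_Rate:O#BeforeAdj:V#Periodic:U1#No_Product")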

Ta da:


Don’t, Just Don’t

There are warnings in the Design and Reference Guide to be very, very, very careful when using the All keyword because it can lead to data – perhaps quite a lot of data – being in places neither you nor anyone else might expect, which is generally viewed as a Bad Thing. OneStream are not shy about pointing this out:


The author in your, um, author appreciates the “please do not do this” note which he suspects came from Product Support or Product Development or practically everyone who works for OneStream.

Having shown you the wrong way to do this, let’s try the right way: Unbalanced Math.

The Four Faces of Unbalanced Math

There are four unbalanced functions: AddUnbalanced, SubtractUnbalanced, DivideUnbalanced, and MultiplyUnbalanced, the last of which is the focus of this blog post. See the Design and Reference Guide for more detail on the first three.

I think of the functions as following (super roughly) this pattern:

x = UnbalancedFunction(y, z that is out of balance with x, the unbalanced bits of z that aren’t mentioned in x)

In this use case, the x, y, and z as well as the missing bits must be surrounded by MultiplyUnbalanced and of course the whole thing is encapsulated within an api.Data.Calculate statement.

In All Its Glory

What does it take? Is it as complicated as I first thought?
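The formula screenshot is missing here too, so what follows is a hedged reconstruction from this post's own description (y, z, and the missing member wrapped by MultiplyUnbalanced inside api.Data.Calculate); check the Design and Reference Guide for the exact placement and argument order:

api.Data.Calculate("A#Distribution:O#Forms:V#Periodic:U1#Total_Products.Base = MultiplyUnbalanced(A#Sales:O#BeforeAdj:V#Periodic, A#Distribution_Rate:O#BeforeAdj:V#Periodic:U1#No_Product, U1#No_Product)")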

Repeating U1#No_Product in that third parameter is all it takes. Easy-peasy.

NB — E#No_Geography isn’t out of balance because E#Pennsylvania and E#South_Carolina as defined in the Data Management Step are implicitly in the target Data Unit tuple.

I’ve modified A#Distribution_Rate to illustrate the impact:


That’s all there is to it. Also, this spares OneStream’s documentation team (and the rest of that company) the deep existential despair that ensues when #All is used. Win, win.

As Always, Easier Once Done

OneStream’s functionality can sometimes be difficult to suss out but with a bit of experimentation, it will give up its secrets. The reward is usually worth the struggle.

Rate calculations are common across all OneStream applications. If you have not yet run across a requirement to perform Unbalanced Math, you will. It’s easy and powerful. Use it, and don’t use #All.

Be seeing you.

 

How Does OneStream Store Data? https://blackdiamondadvisory.com/2022/03/01/how-does-onestream-store-data/ Tue, 01 Mar 2022 18:50:08 +0000

A Financial Analyst Perspective

Working with OneStream software for the first time, coming from a financial analyst background, I was always a bit confused by how data was stored for the reports I was creating or the hacked-together solutions I was deploying through OneStream reporting objects. I was familiar with data warehousing concepts and had worked almost entirely within a Kimball-designed data warehouse, creating Power BI reports, throwing data into Excel sheets, and crunching data in Python, but when I switched over to OneStream’s cube-like data solutions, it didn’t quite click how to manipulate data within OneStream until I saw the SQL tables under the hood.

It’s probably also worth noting that when you start out as a financial analyst, you usually come to understand the OneStream application backwards, because the entire system is typically already built by the time they even let you in to mess around with things like Cube Views or Report Books. So hopefully, this perspective is at least somewhat relatable to those in similar roles.

Essentially, data within OneStream – except for dynamically aggregated results – in its purest form is stored in data records in MS SQL Server that use a “DataRecordYYYY” naming convention, where YYYY is the corresponding year that the data is recorded in. Each row in this table corresponds to all 18 dimensions as columns, along with additional columns for M1, M2, … , M12, coupled with values and statuses.

Something like below:
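The table screenshot is gone, but conceptually each DataRecordYYYY row carries one Id column per dimension (EntityId, ParentId, ScenarioId, AccountId, UD1Id through UD8Id, and so on, stored as member Ids rather than names), then the M1 through M12 value columns, matching status columns, and PartitionId.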

The extra column PartitionId is what gets used by OneStream’s in-memory engines to split up processing by EntityId.

Disclaimer: I am assuming you know how api.Data.Calculate() works as well as how to write Member Filters.

So with that in mind, let’s look at an example of when you run an api.Data.Calculate() function within a business rule or stored formula:

api.Data.Calculate("A#SomeAccount = A#SomeOtherAccount")

When you run this api.Data.Calculate() function above through a business rule or stored value formula, OneStream does some finagling in the background (using functions that manipulate the data in-memory): it looks at all existing instances of A#SomeOtherAccount and sets A#SomeAccount to A#SomeOtherAccount’s values for the data unit that you’re running the function for (where a Data Unit is defined as Cube, Entity, Parent, Consolidation, Scenario, Time).

However, a data unit only covers some of the columns in the SQL table – the full listing of dimensions is the following:

  • Cube
  • Entity
  • Parent
  • Consolidation
  • Scenario
  • Time
  • View
  • Account
  • Flow
  • Origin
  • Intercompany
  • UD1 – UD8

For the above api.Data.Calculate() example, if it were run through a Custom Calculate Data Management step executing a Finance Business Rule (with only the Data Unit defined and the POV left blank), the function would go out and look for every single row in the appropriate DataRecordYYYY table (again, for the relevant data unit) and do a calculation for A#SomeAccount using A#SomeOtherAccount’s values for all the other dimension combinations where data exists.

The important bit here is that it will only grab the dimension combinations where data exists – so for every other dimension combination that has no data, OneStream’s finance engine will not go through the entire database and copy over zeroes. To riff on this a bit more: api.Data.Calculate() can only see rows within the relevant "DataRecordYYYY" table when doing comparisons. This limits data size in the tables, improves performance by observing sparsity, and yields meaningful results.

As a financial analyst, if you were tasked to create some calculation and fill data for some intersection, it is extremely important that you understand what base members you’re doing the calculation for. Because if you left everything wide open like above and you don’t really have a fundamental understanding of how the data is structured, you could end up creating a ton of data accidentally. In the case above, the finance engine will quite literally grab every single data point related to A#SomeOtherAccount for the relevant data unit and create the necessary intersections for A#SomeAccount within the SQL table (which may or may not be what you wanted to happen; but obviously, if it’s something that you’re trying to do intentionally, then go for it).

To harp on about data existing, I mean that the specific dimension combination would need to have at least one value populated for the year.

So, to continue with the api.Data.Calculate() example – let’s say I wanted to copy all the values in A#GrossSales to A#Salary shown in this cube view below (for some weird reason):

The second row level in this cube view is a UD1 that has several products and only serves a purpose in throwing some data in a different dimension combination specific to this example:

Now, if I ran this code within a finance business rule (for the relevant data unit within this cube view):

api.Data.Calculate("A#Salary = A#GrossSales")

This is what happens:

For those values to copy over to A#Salary:U1#None, data must have existed in A#GrossSales:U1#None and if they didn’t then nothing would get copied over. Immediately upon copying, a corresponding row in the relevant SQL table shows up with all dimensions specified as well as values populated.

If I look at the cell POV for the T#2021M1 entry for A#Salary:U1#None in this cube view:

These exact dimensions would be (must be) populated in the SQL table (as MemberIds, not names) along with every M1-M12 value with a corresponding status. You can check yourself in System -> Database if you tried this on your own instance.

Moreover, suppose I wanted to copy some data from A#GrossSales:U1#Product1 to A#Salary:U1#Product1 where data was sparsely populated like below:

In this case, the time periods that show no data will be thrown into the database as zeroes with a cell status of ‘No Data’ for January to April. From June to December, zeroes will also be put in their place, but with a status of ‘Calculated/Derived’.

Since those values are tagged with a ‘No Data’ status, they will show up as blanks in the cube view; the other months that are in the database as zeroes, but with a status of ‘Calculated/Derived’, show up as grey:

This is a fairly long example using api.Data.Calculate(), but this entire process of OneStream throwing data into the respective DataRecordYYYY tables isn’t exclusive to just this function – it’s what happens for everything and anything that is involved with storing cube data in OneStream (not to be confused with stage data, which has its own tables). Another example would be the input of data using the Forms Origin from a Cube View specifically set up for data entry – the same kind of thing happens.

Moreover, just because this data record table exists as a SQL object doesn’t mean you can just update M1-M12 values and statuses with a SQL update statement. OneStream deals with consolidations (and really any calculation in the application) using its own calculation engine by referencing the data record tables, and there’s more to it than just updating the data record table. It’s also advisable never to do this to any of the formally created OneStream SQL tables unless you absolutely know what it will do within the system; just experimenting on data like this will most likely corrupt your application. Feel free to try and break things on your own experimental application though; in fact, I actively encourage you to do so because some of this stuff just isn’t documented as well as it should be.

Warnings aside, understanding how OneStream stores its data (even on the surface like this) definitely gave me a better mental model of what was going on in the background whenever I imported data, saved it through a form, or calculated it through a business rule. I’m the type of person that needs to understand structures at their most granular level when designing solutions, so visually seeing how the data sat in the SQL database not only made it easier to design processes through business rules, but it also gave me a grounding of how I should be thinking about OneStream’s data in general (so as to combat situations where I have no idea why data isn’t showing up when I plop Member Filters into Cube Views or why data that I calculated isn’t as I expect).

There’s way more to talk about when it comes to OneStream’s data, but this should be a good starting point for those just getting into it.

 

What’s in a (Workflow) Name https://blackdiamondadvisory.com/2022/02/09/whats-in-a-workflow-name/ Wed, 09 Feb 2022 17:01:01 +0000


Many names, much confusion, and it’s all rather important

It has been your author’s observation that the glue that holds OneStream applications together – Workflow – is a victim of terminological inexactitude. No, not that odious euphemism, but instead the common usage of just one term – “Workflow Profile” – for the four (arguably five) different Workflow Profile types. When we OneStream practitioners use the same word to mean many things, we confuse ourselves, make mistakes, and generally make everyone who touches the application unhappy. Happy is more fun.

To avoid that state of Workflow-induced despair, we need a commonly agreed-upon taxonomy and then we must use it. Happily, OneStream has created those Workflow types and definitions (they could not do otherwise), so the path to understanding, then, is to get all OneStream practitioners to comprehend and adhere to those taxonomical definitions. Working with a tool as sophisticated as Workflow requires terminological exactitude so that we all understand what on earth we’re talking about.

Four, just four

As noted, there are four main Workflow types: Cube Root Workflow Profile, Default Workflow Profile, Workflow Profile, and Workflow Child Profile.

How hard could it possibly be? Let’s find out.

A note

This post is not a comprehensive guide to Workflow. For that, see the OneStream Design and Reference Guide and its Workflow Guides section.

Cube Root Workflow Profile

The Cube Root Workflow Profile is defined at the Cube Level by Scenario Type via Scenario Suffixes. Think of using Scenario Suffixes at the Cube level as a sort of extended (OneStream uses the term “varying”) Workflow as it allows your application to segregate Workflow by Scenario Type.


In the above example, the Scenario Type Suffixes are Actual and Forecast. The values are arbitrary – they could be Potato or Happy or more likely the ones shown or Budget or LRP. Try to use something meaningful.

Missing Scenario Types

Scenario Types are good practice because they allow explicit assignments of an Entity to more than one Workflow Profile (more anon on this term). Even if there is no immediate need for them, applications have a way of growing and it’s best to have Scenario Types in place when they do. There’s no need to assign Suffixes to each Scenario Type, just one will do as a start and then expand as many times as needed.

Naming confusion

A note about Scenario Types – don’t conflate a Scenario named “Plan” with a Scenario Type of “Plan”; the only logical and functional link is one you, Gentle Reader, define in the tool. Although a “Plan” Scenario Type can certainly have a Suffix of “Plan” and be used in the “Plan” Scenario, it isn’t required. This sort of identical naming convention, while appealing on its face, breaks down if there is more than one Scenario that logically shares a Scenario Type, which is often the case in planning applications. Whew.

As with everything OneStream, there are many ways to approach a requirement, none of which are exactly wrong but some of which are not quite as good as others. Your application’s needs will dictate what is best.

Using the example below of three Scenario Types (Actual, Forecast, and Plan) with two different Scenario Suffixes (Actual and Forecast), when a new Cube Root Profile is created, the two Scenario Types of Actual and Forecast appear; the Workflow Scenario Type Suffix defines the Workflow Cube Root Profiles, not the Scenario Types themselves.

Creating a Cube Root Workflow Profile

The naming convention is CubeName_ScenarioType.


Clicking on either choice will create the Cube Root Workflow Profile Name. For the purposes of this post, only Sample_Forecast will be used.

It’s easy to identify in the Workflow Profile editor hierarchy as it’s at the very tippy top and has a cube icon to the left of the name:

Default Workflow Profile

Once the Cube Root Workflow Profile is created, the Default Workflow Profile Sample_Forecast_Default appears automatically.


A Default Workflow Profile connects the Cube’s Entities and Workflow itself. All Entities are by default assigned to the Workflow – note that the Entity Assignment property sheet does not exist in the Default Workflow Profile.

Some of its salient characteristics are:

  1. It is named CubeName_WorkflowSuffix_Default.
  2. It joins Cube Entities to Workflow.
  3. It cannot be deleted.
  4. Only Administrators should be able to see it.

As with all Workflow Profiles, the Workflow Child Profiles of Import, Forms, and Adj appear below the Workflow Profile name.

A click on the Import Child Profile (this is just illustrative – don’t actually use the Default Workflow Profile) shows that two Scenario Types are available: Forecast and Plan. These are the Scenario Types that share the Workflow Suffix “Forecast”.


Scenarios with Scenario Types

The Plan Scenario has a Scenario Type of “Forecast”. (Remember what I said about the potential for confusion? Here it is.)

Scenario Types are linked to the Scenario Type property in the Scenario itself. This relationship defines the Scenarios in OnePlace. Whew, again.


All you really have to know is that if a Cube’s Workflow has defined Scenario Types, the Workflow is extended and a Scenario tagged with that Scenario Type is now part of that Workflow; only Scenarios with that Scenario Type will appear in OnePlace.


Whew, again and again.

Workflow Profile

Workflow Profile Types

There are three Workflow Profile Types: Review, Base Input, and Parent Input.


As this post is written by Mr. Planning, I’ll confidently state that Base Input is overwhelmingly the most used type in planning applications, although of course Review and Parent Input are used as well; Consolidations applications are far more likely to use all three types. As the purview of this post is not All Things Workflow but instead Workflow terminology, only Base Input will be examined.

Workflow Child Profile

We have now almost reached the end of our Workflow taxonomical journey.

Note that by default, the Workflow Child Profiles of Import, Forms, and Adj have been automatically created.


Workflow Child Profile types are tied to the Origin dimension, with a fairly logical grouping of Import with Import, Forms with Forms, and Adj with AdjInput.


That’s it – we are at the bottom of the Workflow Profile tree, with the Workflow Child Profile as the most atomic.

There is one more we-call-it-Workflow-Profile-but-really-it’s-something-else element: Workflow Names.

Workflow Names

Workflow Names are confusingly called “Default Workflow” when a Workflow Child Profile is created:


They are called Workflow Names within a Workflow Child Profile:


Think of Workflow Names as the actions that drive Workflow. Given that the property sheet for a Workflow Child Profile uses “Workflow Names”, it seems most logical to use that term when referring to the many, many, many actions (almost 60) they support. Whew, one last time.

As an example, in the Workflow Child Profile Import, I can use the traditional Import, Validate, Load Workflow Name to load data:


Or I can use Direct and change the way data is loaded into the Cube:


Do we have unanimity? Close to it? We should.

Workflow is the core structure of OneStream application data processing. Workflow is sophisticated and powerful. Its potential is great, as is its potential to go sideways if discussed and thought about incorrectly.

To use it correctly, we must mean what we say by using the right terms in the right place.

Thus:

  1. Cube Root Workflow Profiles are the topmost level of the Workflow hierarchy. They are tied to Scenario Types via Workflow Suffixes. Scenarios that have a matching Scenario Type are visible in OnePlace.
  2. The Default Workflow Profile is automatically generated when a Cube Root Workflow Profile is created. It bridges Cube Entities and Workflow. Do not use it.
  3. Below the main Cube Root Workflow Profile parent, Workflow Profiles join data, metadata, and users.
  4. Workflow Child Profiles are where users interact with Workflow, be it via data loads, forms, or adjustments, through Workflow Name action types.

That’s it.

Be seeing you.

New Hire and KC City Office Launch https://blackdiamondadvisory.com/2020/10/08/new-hire-and-kc-city-office-launch/ Thu, 08 Oct 2020 02:49:14 +0000

Black Diamond Advisory Announces Chad Hatcher to join leadership team and launches Kansas City Office  

Global firm builds executive team and emerges as the next leader in digital finance transformation   

KANSAS CITY, MO August 4, 2020   (GLOBE NEWSWIRE) – Black Diamond Advisory was built to transform the office of finance by creating an industry powerhouse of top talent from the most respected leaders in OneStream technology together with consulting leaders in digital finance transformation. We are proud to announce the addition of Chad Hatcher to our Leadership Team.  Chad comes to Black Diamond from AMC Theatre where he was the Manager of Financial Systems and Enterprise Architect of their global corporate performance management solution.

“Chad’s unique combination of technology consulting and industry leadership experience will bring a great perspective and balance to our team and client delivery capability,” says Randy Werder, President, Black Diamond Advisory.  Black Diamond is founded on the principle of “Experts Only” by combining the talents of many different backgrounds to provide the most diverse and innovative culture and capability in our industry.  Black Diamond includes former partners and founders from Arthur Andersen, MarketSphere, Grant Thornton, Ranzal, and Fortune 100 financial executives who have come together to create a firm that combines transformative business capabilities with execution excellence, helping clients rethink the possible and reimplement the future with a focus on innovation, usability, and maximizing value from the broad OneStream platform.

Former MarketSphere Co-Founder and current Black Diamond Chairman & CEO, Carl Yost, is excited about establishing a formal presence in Kansas City. “Kansas City is a key market for Black Diamond and OneStream. Bringing Black Diamond to my hometown has been a goal of mine since the firm was established. I am excited about Chad’s leadership and look forward to continuing all of the incredible relationships that were established at Arthur Andersen and MarketSphere,” said Yost.

PRESS CONTACT  
Randy Werder
President
T: (407)758-7382 
E: rwerder@blackdiamondadvisory.com 

About Black Diamond Advisory  
Black Diamond is a leading global digital finance transformation firm and a Platinum Partner of OneStream Software.

About OneStream Software 
OneStream Software provides a market-leading CPM solution that unifies and simplifies financial consolidation, planning, reporting, analytics and financial data quality for sophisticated organizations. Deployed via the cloud or on-premise, OneStream’s unified platform enables organizations to modernize Finance, replace multiple legacy applications, and reduce the total cost of ownership of financial systems. OneStream unleashes Finance teams to spend less time on data integration and system maintenance – and more time focusing on driving business performance.

The OneStream XF MarketPlace features over 50 downloadable solutions that allow customers to easily extend the value of their CPM platform to meet the changing needs of Finance and Operations.

West Coast Mixology Event (City by the Bay) https://blackdiamondadvisory.com/2020/10/06/west-coast-mixology-event-city-by-the-bay/ Tue, 06 Oct 2020 02:32:13 +0000

Dear Finance Executive,  

Join us on October 15, 2020 at 5:00 PM Pacific for a complimentary virtual mixology event powered by Black Diamond Advisory, a Platinum OneStream Partner. Our guests will craft two cocktails with an award-winning mixologist. Learn about flavor balance and mixology skills with instructions, tips and stories along the way. We provide the ingredients!

Ahead of the event, Avital will send you all the necessary ingredients to create cocktails. During this exclusive event, you will be guided through the mixology, network with fellow finance and accounting professionals, and have some fun! 

Please follow this Registration Link to enter your information directly with Avital to receive your ingredients or reach out to Sherri Schaffroth (sschaffroth@blackdiamondadvisory.com).  

We’ll hear from Jason Karber, CPM Expert and former Koch Industries OneStream customer along with OneStream Sales VP, Mark Reed. Jason will tell a quick customer success story and Mark will share a brief overview of how OneStream has achieved 100% success in delivering world-class corporate performance management (CPM) solutions.  

This event is by invitation only and space is limited. 

To receive your ingredients, please complete the shipping form and register by Wednesday, October 1st. Spouses and loved ones are welcome to participate with you.  

 Cheers!  

Modernizing the Federal Budget Formulation https://blackdiamondadvisory.com/2020/10/06/modernizing-the-federal-budget-formulation/ Tue, 06 Oct 2020 02:17:09 +0000

REGISTER NOW

Local government agency finance leaders within the Office of Budget & Management (OMB) and Comptroller offices are under pressure to improve decision support, increase transparency, and create efficiencies. With continued scrutiny of fiscal spending and the tightening of legislative budgets, the formulation process is evolving from line item approvals to performance-based budgets to provide greater visibility into longer-term return on investment. 

To address these increasing demands, finance leaders are re-evaluating their legacy corporate performance management (CPM) tools and are modernizing with CPM 2.0 solutions. CPM 2.0 applications move finance transformation forward by unifying & aligning detailed operational plans, capital spending and workplace plans with financial results and reporting—all in a single application. 

During this webinar you will learn how to: 

  • Reduce risk with built-in Financial Data Quality 
  • Manage the Budget Formulation Process 
  • Automate the budget book 
  • Enable users with self- service visualizations & reporting 

 

 Randy Werder – President, Black Diamond 

Tyler Rodichok – Manager, Deloitte 

John O’Rourke – VP of Product Marketing and Communications, OneStream Software 

 

REGISTER NOW

Selecting a Performance Management Software with Your Head in the Clouds https://blackdiamondadvisory.com/2020/09/30/selecting-a-performance-management-software-with-your-head-in-the-clouds/ Wed, 30 Sep 2020 20:16:07 +0000

How OneStream Software offers the most customer-focused cloud offering in the industry.

“You’ve gotta start with the customer experience and work backward to the technology. You can’t start with the technology and figure out where you are going to sell it.” – Steve Jobs, 1997

After years of being a happy OneStream customer and gaining insider insight into the SaaS software industry, I want to set the record straight on cloud offerings in the Corporate Performance Management market. In this article, I will attempt to give a detailed review of a client experience utilizing OneStream’s Azure cloud solution and compare/contrast this with the current “True SaaS” solutions.

Business Leaders evaluating cloud software have important questions to ask. The first should be to ask how the product produces the best outcome for your business and what are the key criteria that should be used to measure performance and reliability in cloud offerings. Below are a few categories that evaluators should consider:

Other Considerations

Incentives: One of the “secrets” of cloud software is that there are incentives at play when selling the solution. One is that the vendor often believes the costs of the cloud to support the product are trending downward for either option. Investor expectations often include an assumption that the “True SaaS” products have steady or increasing license costs with a declining cost of sales. With OneStream, the customer reaps the full benefit of cloud cost reductions or performance improvements during this process. Carefully consider which benefits your company more.

Change Control: It is true that updates to the “True SaaS” solutions have historically been more frequent. While this sounds like a benefit, consider it carefully. Frequent updates are an exciting feature for software with a limited scope; for software meeting complex and diverse use cases they can be dangerous. Customers in these areas often have situations that were difficult to foresee in the vendor’s testing. Vendor quality control is essential in both categories, and deployment timing flexibility may be an essential business requirement. Many wish to retain control of updates during year-end, acquisitions, restructuring, and legal hold situations. A vendor may claim zero defects, but past production rollouts that impacted the entire customer base adversely may tell a different story. Understand your testing requirements and windows.

Performance: It is critical to discover whether your prospective solution meets your speed and consistency expectations. This can be very difficult to ascertain because “True SaaS” solutions can have terms and conditions that prohibit conducting benchmarking or disclosing any results. Make sure to understand whether your solution’s performance slows during other customers’ peak activity. I have seen examples where a frustrated customer called support only to hear that they needed to wait until other jobs had finished, and even one where the customer was asked to reschedule high-overhead jobs to a more opportune time. Diligently review the limitations of acceptable use, interview peers that are live, and incorporate dispute resolution in your contract.

Lock-in: Lock-in can be a challenge for any software. It can be an easy subject to miss during evaluation and can become a big challenge in the future. Evaluate how difficult it is to extract 100% of your data, attachments, etc. Products vary significantly on this, and some could be described as “designed for lock-in”. If your product is sold to another vendor or supported on a different cloud, what impact does that have on your business? Businesses sometimes prefer to avoid one provider over another.

Security: Security can be very good with either option, though OneStream’s cloud offering approach is widely accepted to be the most secure. OneStream has been certified at leading levels (FedRAMP) that I have not seen other vendors match. Independently verify your expectations and your vendor’s offering when it is relevant to you.

Scalability: Scalability can be excellent with either option. In instances where it is at the discretion of the vendor, consider the vendor’s reputation and ensure that expectations are clear. In use cases where the load is evenly spread over time, “true SaaS” solutions can be very cost-efficient and yield high performance. In performance management and ERPs, there is often concurrent demand (nearly everyone closes monthly in the first days or weeks) that makes it difficult to reap the full benefit. With OneStream, the cloud resources are determined by the customer’s use case and are dedicated.

Summary

As always, consider it essential to talk with multiple customers that have similar complexity and are live on any prospective solution. Ask your vendor for a 100% list of customers who have purchased the product (live or not) and talk with as many as you can.

OneStream’s cloud offering is the most customer-focused approach in the marketplace. While other vendors may disparage the approach, it is the one that provides the power, reliability, and consistency needed for large enterprises. Full SaaS solutions have incentives present that can be a detriment to the customer. With OneStream, customers retain control of their data, performance, and options in the future. Many will find this favorable compared to the other alternatives researched.
