Why Use R for Business Case Analysis?

Business case analyses that are typically developed in spreadsheets are fraught with a lack of transparency and are prone to propagating significant coding errors. The R programming language provides a better alternative for creating clear, minimal-error analyses.

Even if you are new to R, you have most likely noticed that R is used almost exclusively for statistical analysis, as it is described by The R Project for Statistical Computing. Most people who use R do not employ it for the type of inquiry for which business case analysts use spreadsheets: selecting projects to implement, making capital allocation decisions, or justifying strategic pursuits. Statistical analysis from R might inform those decisions, but most business case analysts don’t employ R for those activities.

Obviously, as the heading of this section suggests, I am recommending a different approach from the status quo. I’m not just suggesting that R might be a useful replacement for spreadsheets; rather, I’m suggesting that better alternatives to spreadsheets be found for doing business case analysis. I think R is a great candidate. Before I explain why, let me explain why I don’t like spreadsheets.

Think about how spreadsheets communicate information. They essentially use three layers of presentation:

  1. Tabulation
  2. Formulation
  3. Logic

When we open a spreadsheet, usually the first thing we see is tables and tables of numbers. The tables might have explanatory column and row headers. The cells might have descriptive comments inserted to provide some deeper explanation. Failure to provide these explanatory clues represents more a failing of the spreadsheet developer’s communication abilities than of the spreadsheet environment, but even with the best of explanations, the pattern that emerges from the values in the cells can be difficult to discern. Fortunately, spreadsheet developers can supply graphs of the results, but even those can be misleading chart junk. Even when charts are well constructed, their placement in models often doesn’t clearly indicate which array of values is being graphed, leaving the user to figure that out.

To understand how the numbers arise, we might ask about the formulas. By clicking on a cell we can see the formula used, but unfortunately the situation here is even worse than in the prior layer of presentation, with its tables of featureless numbers. Here, we don’t see formulas written in a form that reveals their underlying meaning; rather, we see formulas constructed by pointing to other cell locations on the sheet. We cannot easily see how intermediate calculations relate to other intermediate calculations. As such, spreadsheet formulation is inherently tied to the structural layout of the spreadsheet, which does not necessarily reveal the inherent relationships among the ideas it encodes. This is like saying that the meaning of a book derives from its placement on a bookshelf, not from the development of the ideas it contains.

Although the goal of good analysis should not be more complex models, a deeper inquiry into a subject usually does create a need for some level of complexity that exceeds the simplistic. As a spreadsheet grows in complexity, though, it becomes increasingly difficult to extend the size of its tables (both in the length of the indexes that structure them and in the number of indexes that configure their dimensionality) as a direct consequence of its current configuration. Furthermore, if we need to add new tables, choosing where to place them and how to configure them also depends almost entirely on the placement and configuration of previously constructed tables. So, as the complexity of a spreadsheet increases, the model naturally becomes less flexible in the ways it can be represented. It becomes crystallized by the development of its own real estate.

The cell-referencing formulation method also increases the likelihood of error propagation, because formulas are generally written in a granular manner that requires the formula to be repeated across every element in at least one index of a table’s organizing structure. Usually, the first instance of a required formula is written within one cell in the table; it is then copied to all the appropriate adjacent cells. If the first formula is incorrect, all the copies will be wrong, too. If the formula is sufficiently long and complex, reading it to debug it properly becomes very difficult. Really, the formula doesn’t have to be that complicated, or the model that complex, for this kind of failure to occur, as the recent London Whale VaR model and Reinhart-Rogoff study-on-debt debacles demonstrated. Of course, many of these problems can be overcome by analysts agreeing on a quality and style convention. Even though several such conventions are available for reuse, they are seldom employed consistently (if at all) within an organization, and certainly not across similar commercial and academic environments.

All of this builds to the most important failure of spreadsheets–the failure to clearly communicate the underlying meaning and logic of the analytic model. The first layer visually presents the numbers, but the patterns in them are difficult to discern unless good graphical representations are employed with clear references back to the data used to construct them. The second layer, which is only visible if requested, uses an arcane formulation language that seems inherently unrelated to the actual nature of the analysis and the internal concepts that link inputs to outputs. The final layer–the logic, the meaning, the essence of the model–is left almost entirely to the inference capability of any user, other than the developer, who happens to need to use the model. The most important layer is the most ambiguous, the least obvious. I think the order should be the exact opposite.

How to Implement R for Business Case Analysis

When I bring up these complaints, the first response I usually get is, “Rob! Can’t we just eat our dinner without you complaining about spreadsheets again?” When my dinner company tends to look more like fellow analysts, though, I get, “So what? Spreadsheets are cheap and ubiquitous. Everyone has one, and just about anyone can figure out how to put numbers in them. I can give my analysis to anyone, and anyone can open it up and read it.”

Free, ubiquitous, and easy to use are all great characteristics of some things in their proper context, but they aren’t necessarily universally beneficial characteristics for decision-aiding systems, especially for organizations in which complex ideas are formulated, tested, revisited, communicated, and refactored for later use. Why? Because those three characteristics aren’t the attributes that create and transfer value. Free, ubiquitous, and easy to use might have value, but the real value comes from the way in which logic is clearly constructed, communicated, stress-tested, and controlled for errors.

I know that what most people have in mind with this common response is the low cost of entry to spreadsheets and the relative ease of creating reports with them (for which I think spreadsheets are excellent, by the way). But considering the shortcomings and failures of spreadsheets, based on the persistent errors I’ve seen in client spreadsheets and the humiliating ones I’ve created myself, I think the price of cheap is too high. The answer to the first part of the objection–that spreadsheets are cheap–is that R is free; freer, in fact, than spreadsheets. In some sense, it’s even easier to use, because the formulation layer can be written directly in a simple text file without an intermediate development environment. Of course, R is not ubiquitous, but it is freely available on the Internet to download and install for immediate use.

Unlike spreadsheets, R is a programming language with the built-in capacity to operate over arrays as if they were whole objects, a feature that demolishes any justification for the granular cell-referencing syntax of spreadsheets. Consider the following example.

Suppose we want to model a simple parabola over the interval [–10, 10]. In R, we might start by defining an index we call x.axis as an integer series.

x.axis <- -10:10

which looks like this,

[1] -10 -9 -8 -7 -6 -5 -4 -3 -2 -1 0 1 2 3 4 5 6 7 8 9 10

when we call x.axis.

To define a simple parabola, we then write a single formula over the entire series

parabola <- x.axis^2

which produces, as you might now expect, a series that looks like this:

[1] 100 81 64 49 36 25 16 9 4 1 0 1 4 9 16 25 36 49 64 81 100

Producing this result in R required exactly two formulas. A typical spreadsheet that replicates this same example requires manually typing in 21 numbers and then 21 formulas, each pointing to the particular value in the series we represented with x.axis. The spreadsheet version thus presents 42 opportunities for error. Even if we use a formula to create the spreadsheet analog of the x.axis values, the number of opportunities for failure remains the same.

Extending the range of parabola requires little more than changing the parameters in the x.axis definition. No additional formulas need to be written, which is not the case if we needed to extend the same calculation in our spreadsheet. There, more formulas would have to be written, and the number of potential opportunities for error would continue to increase.
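For instance (a minimal sketch; the wider bounds here are arbitrary), doubling the interval takes nothing more than revising the series definition and rerunning the same one-line formula:

x.axis <- -20:20      # widen the interval; no other edits needed
parabola <- x.axis^2  # the same formula now covers 41 points

The spreadsheet equivalent would require inserting rows or columns and copying formulas into each new cell.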

The number of formula errors that are possible in R is directly related to the total number of formula parameters required to correctly write each formula. In a spreadsheet, the number of formula errors is a function of both the number of formula parameters and the number of cell locations needed to represent the full response range of results. Can we make errors in R-based analysis? Of course, but the potential for those errors is exponentially larger in spreadsheets.

As we’ve already seen, R operates according to a linear flow that guides the development of logic. Also, variables can be named in a way that makes sense in the context of the problem, so that the program formulation and the business logic are more closely merged, reducing the burden of inference about the meaning of formulas for auditors and other users. In Chapter 2, I’ll present a style guide that will help you maintain clarity in the definition of variables, functions, and files.
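To illustrate (a hypothetical sketch; the variable names and values are invented for this example), contextual names let the business logic read directly from the formulation:

units.sold <- c(1000, 1500, 2200)  # forecast units for years 1-3
price.per.unit <- 25               # selling price in dollars
unit.cost <- 15                    # variable cost per unit in dollars
annual.revenue <- units.sold * price.per.unit
annual.gross.profit <- annual.revenue - units.sold * unit.cost

An auditor can follow this calculation without a map of cell coordinates; the names carry the meaning that spreadsheet references like B2*C2 obscure.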

Although R answers the concerns of direct cost and the propagation of formula errors, its procedural language structure presents a higher barrier to improper use, because it demands a more rational, structured logic than spreadsheets do, a rigor that people usually learn from programming and software design. The best aspect of R is that it communicates the formulation and logic layer of an analysis in a more straightforward manner, as the procedural instructions for performing calculations. It preserves the flow of thought that is necessary to move from starting assumptions to conclusions. The numerical layer is presented only when requested, but the logic and formulation are more visibly available. As we move forward through this tutorial, I’ll explain more about how these features support effective business case analysis.
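As a preview of that flow (a minimal sketch with assumed inputs; the discount rate and cash flows are hypothetical), an R script reads naturally from assumptions down to a conclusion:

discount.rate <- 0.12                     # assumed annual rate
cash.flow <- c(-500, 150, 200, 250, 300)  # years 0-4, in $ thousands
year <- 0:(length(cash.flow) - 1)
discount.factor <- 1 / (1 + discount.rate)^year
npv <- sum(cash.flow * discount.factor)
npv                                       # display the result only when requested

The assumptions sit at the top, the intermediate logic follows in order, and the single numerical result appears only when we ask for it.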

 
