Financial economics is the branch of economics characterized by a "concentration on monetary activities", in which "money of one type or another is likely to appear on both sides of a trade".[1] Its concern is thus the interrelation of financial variables, such as share prices, interest rates and exchange rates, as opposed to those concerning the real economy. It has two main areas of focus:[2] asset pricing and corporate finance; the first being the perspective of providers of capital, i.e. investors, and the second of users of capital. It thus provides the theoretical underpinning for much of finance.
The subject is concerned with "the allocation and deployment of economic resources, both spatially and across time, in an uncertain environment".[3][4] It therefore centers on decision making under uncertainty in the context of the financial markets, and the resultant economic and financial models and principles, and is concerned with deriving testable or policy implications from acceptable assumptions. It thus also includes a formal study of the financial markets themselves, especially market microstructure and market regulation. It is built on the foundations of microeconomics and decision theory.
Financial econometrics is the branch of financial economics that uses econometric techniques to parameterise the relationships identified. Mathematical finance is related in that it will derive and extend the mathematical or numerical models suggested by financial economics. Whereas financial economics has a primarily microeconomic focus, monetary economics is primarily macroeconomic in nature.
| Fundamental valuation equation[5] |
Four equivalent formulations,[6] where $P$ is the asset price, $X_s$ its payoff in state $s$, $p_s$ and $q_s$ the physical and risk-neutral probabilities of state $s$, $r$ the risk-free rate, $m$ the stochastic discount factor, and $\pi_s$ the state price of state $s$:
$P = \dfrac{\mathbb{E}_p[X]}{1 + r + \text{risk premium}} = \dfrac{\mathbb{E}_q[X]}{1 + r} = \mathbb{E}_p[mX] = \sum_s \pi_s X_s$ |
Financial economics studies how rational investors would apply decision theory to investment management. The subject is thus built on the foundations of microeconomics and derives several key results for the application of decision making under uncertainty to the financial markets. The underlying economic logic yields the fundamental theorem of asset pricing, which gives the conditions for arbitrage-free asset pricing.[6][5] The various "fundamental" valuation formulae result directly.
Underlying all of financial economics are the concepts of present value and expectation.[6]
Calculating their present value, in the first formula, allows the decision maker to aggregate the cashflows (or other returns) to be produced by the asset in the future to a single value at the date in question, and to thus more readily compare two opportunities; this concept is then the starting point for financial decision making.[note 1] (Note that here, "$i$" represents a generic (or arbitrary) discount rate applied to the cash flows, whereas in the valuation formulae, the risk-free rate is applied once these have been "adjusted" for their riskiness; see below.)
An immediate extension is to combine probabilities with present value, leading to the expected value criterion, which sets asset value as a function of the sizes of the expected payouts and the probabilities of their occurrence, $X_s$ and $p_s$ respectively.[note 2]
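These two building blocks can be sketched as follows; the figures are illustrative assumptions, not drawn from the article:

```python
def present_value(cashflows, rate):
    """Discount a list of (time, amount) pairs at a generic discount rate."""
    return sum(amount / (1 + rate) ** t for t, amount in cashflows)

def expected_value(payouts, probabilities):
    """Probability-weighted average of the possible payouts."""
    return sum(x * p for x, p in zip(payouts, probabilities))

# An asset paying 100 in one year and 100 in two years, discounted at 5%:
pv = present_value([(1, 100.0), (2, 100.0)], 0.05)

# A gamble paying 120 or 80 with equal probability:
ev = expected_value([120.0, 80.0], [0.5, 0.5])
```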
This decision method, however, fails to consider risk aversion. In other words, since individuals receive greater utility from an extra dollar when they are poor and less utility when comparatively rich, the approach is therefore to "adjust" the weight assigned to the various outcomes, i.e. "states", correspondingly. See indifference price. (Some investors may in fact be risk seeking as opposed to risk averse, but the same logic would apply.)
Choice under uncertainty here may then be defined as the maximization of expected utility. More formally, the resulting expected utility hypothesis states that, if certain axioms are satisfied, the subjective value associated with a gamble by an individual is that individual's statistical expectation of the valuations of the outcomes of that gamble.
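A minimal sketch of expected utility under a concave (risk-averse) utility function, here log utility, with illustrative payouts:

```python
import math

def expected_utility(payouts, probabilities, utility=math.log):
    """Expected utility of a gamble under a concave (risk-averse) utility."""
    return sum(p * utility(x) for x, p in zip(payouts, probabilities))

# A risk-averse (log-utility) agent prefers 100 for sure over a 50/50
# gamble between 150 and 50, even though both have the same expected value:
sure = expected_utility([100.0], [1.0])
gamble = expected_utility([150.0, 50.0], [0.5, 0.5])
```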
The impetus for these ideas arises from various inconsistencies observed under the expected value framework, such as the St. Petersburg paradox and the Ellsberg paradox.[note 3]
| JEL classification codes |
In the Journal of Economic Literature classification codes, Financial Economics is one of the 19 primary classifications, at JEL: G. It follows Monetary and International Economics and precedes Public Economics. The New Palgrave Dictionary of Economics also uses the JEL codes to classify its entries under the primary and secondary categories, each further divided into its tertiary categories. |
The concepts of arbitrage-free, "rational", pricing and equilibrium are then coupled[10] with the above to derive various of the "classical"[11] (or "neo-classical"[12]) financial economics models.
Rational pricing is the assumption that asset prices (and hence asset pricing models) will reflect the arbitrage-free price of the asset, as any deviation from this price will be arbitraged away: the "law of one price". This assumption is useful in pricing fixed income securities, particularly bonds, and is fundamental to the pricing of derivative instruments.
Economic equilibrium is a state in which economic forces such as supply and demand are balanced, and in the absence of external influences these equilibrium values of economic variables will not change.General equilibrium deals with the behavior of supply, demand, and prices in a whole economy with several or many interacting markets, by seeking to prove that a set of prices exists that will result in an overall equilibrium. (This is in contrast to partial equilibrium, which only analyzes single markets.)
The two concepts are linked as follows: where market prices are complete and do not allow profitable arbitrage, i.e. they comprise an arbitrage-free market, then these prices are also said to constitute an "arbitrage equilibrium". Intuitively, this may be seen by considering that where an arbitrage opportunity does exist, then prices can be expected to change, and they are therefore not in equilibrium.[13] An arbitrage equilibrium is thus a precondition for a general economic equilibrium.
"Complete" here means that there is a price for every asset in every possible state of the world, $s$, and that the complete set of possible bets on future states-of-the-world can therefore be constructed with existing assets (assuming no friction): essentially solving simultaneously for $n$ (risk-neutral) probabilities, $q_s$, given $n$ prices. For a simplified example see Rational pricing § Risk neutral valuation, where the economy has only two possible states – up and down – and where $q_{up}$ and $q_{down}$ (= $1 - q_{up}$) are the two corresponding probabilities, and in turn, the derived distribution, or "measure".
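The two-state calculation can be sketched as follows, with illustrative numbers (a stock at 100 moving to 110 or 90, and a 5% risk-free rate); the function name and figures are assumptions for illustration:

```python
def risk_neutral_prob(s0, s_up, s_down, r):
    """Solve for the risk-neutral up-probability q in a two-state economy,
    from s0 = (q * s_up + (1 - q) * s_down) / (1 + r)."""
    return ((1 + r) * s0 - s_down) / (s_up - s_down)

q_up = risk_neutral_prob(100.0, 110.0, 90.0, 0.05)
q_down = 1 - q_up   # the two probabilities, i.e. the derived "measure", sum to 1
```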
The formal derivation will proceed by arbitrage arguments.[6][13][10] The analysis here is often undertaken assuming a representative agent,[14] essentially treating all market participants, "agents", as identical (or, at least, assuming that they act in such a way that the sum of their choices is equivalent to the decision of one individual) with the effect that the problems are then mathematically tractable.
With this measure in place, the expected, i.e. required, return of any security (or portfolio) will then equal the risk-free return, plus an "adjustment for risk",[6] i.e. a security-specific risk premium, compensating for the extent to which its cashflows are unpredictable. All pricing models are then essentially variants of this, given specific assumptions or conditions.[6][5][15] This approach is consistent with the above, but with the expectation based on "the market" (i.e. arbitrage-free, and, per the theorem, therefore in equilibrium) as opposed to individual preferences.
Continuing the example, in pricing a derivative instrument, its forecasted cashflows in the abovementioned up- and down-states, $X_{up}$ and $X_{down}$, are multiplied through by $q_{up}$ and $q_{down}$, and are then discounted at the risk-free interest rate; per the second equation above. In pricing a "fundamental", underlying, instrument (in equilibrium), on the other hand, a risk-appropriate premium over risk-free is required in the discounting, essentially employing the first equation with the risk-free rate and the premium combined. This premium may be derived by the CAPM (or extensions) as will be seen under § Uncertainty.
The difference is explained as follows: By construction, the value of the derivative will (must) grow at the risk-free rate, and, by arbitrage arguments, its value must then be discounted correspondingly; in the case of an option, this is achieved by "manufacturing" the instrument as a combination of the underlying and a risk-free "bond"; see Rational pricing § Delta hedging (and § Uncertainty below). Where the underlying is itself being priced, such "manufacturing" is of course not possible – the instrument being "fundamental", i.e. as opposed to "derivative" – and a premium is then required for risk.
(Correspondingly, mathematical finance separates into two analytic regimes: risk and portfolio management (generally) use physical (or actual or actuarial) probability, denoted by "P"; while derivatives pricing uses risk-neutral probability (or arbitrage-pricing probability), denoted by "Q". In specific applications the lower case is used, as in the above equations.)
With the above relationship established, the further specialized Arrow–Debreu model may be derived.[note 4] This result suggests that, under certain economic conditions, there must be a set of prices such that aggregate supplies will equal aggregate demands for every commodity in the economy. The Arrow–Debreu model applies to economies with maximally complete markets, in which there exists a market for every time period and forward prices for every commodity at all time periods.
A direct extension, then, is the concept of a state price security, also called an Arrow–Debreu security, a contract that agrees to pay one unit of a numeraire (a currency or a commodity) if a particular state occurs ("up" and "down" in the simplified example above) at a particular time in the future, and pays zero numeraire in all the other states. The price of this security is the state price of this particular state of the world; the collection of these is also referred to as a "Risk Neutral Density".[19]
In the above example, the state prices, $\pi_{up}$ and $\pi_{down}$, would equate to the present values of $q_{up}$ and $q_{down}$: i.e. what one would pay today, respectively, for the up- and down-state securities; the state price vector is the vector of state prices for all states. Applied to derivative valuation, the price today would simply be [$\pi_{up}$ × $X_{up}$ + $\pi_{down}$ × $X_{down}$]: the fourth formula (see above regarding the absence of a risk premium here). For a continuous random variable indicating a continuum of possible states, the value is found by integrating over the state price "density".
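Continuing the illustrative two-state numbers (a risk-neutral up-probability of 0.75 and a 5% risk-free rate), state prices and the resulting claim value can be sketched as:

```python
def state_prices(q_up, r):
    """State prices: risk-neutral probabilities discounted at the risk-free rate."""
    return q_up / (1 + r), (1 - q_up) / (1 + r)

pi_up, pi_down = state_prices(0.75, 0.05)

# A claim paying 10 in the up-state and 0 in the down-state is priced
# as a linear combination of its payoffs and the state prices:
claim = pi_up * 10.0 + pi_down * 0.0
```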
State prices find immediate application as a conceptual tool ("contingent claim analysis"),[6] but can also be applied to valuation problems.[20] Given the pricing mechanism described, one can decompose the derivative value – true in fact for "every security"[2] – as a linear combination of its state-prices; i.e. back-solve for the state-prices corresponding to observed derivative prices.[21][20][19] These recovered state-prices can then be used for valuation of other instruments with exposure to the underlyer, or for other decision making relating to the underlyer itself.
Using the related stochastic discount factor (SDF; also called the pricing kernel), the asset price is computed by "discounting" the future cash flow by the stochastic factor $m$, and then taking the expectation;[15][22] the third equation above. Essentially, this factor divides expected utility at the relevant future period – a function of the possible asset values realized under each state – by the utility due to today's wealth, and is then also referred to as "the intertemporal marginal rate of substitution". Correspondingly, the SDF, $m$, may be thought of as the discounted value of risk aversion. (The latter may be inferred via the ratio of risk-neutral to physical probabilities; see Girsanov theorem and Radon–Nikodym derivative.)
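A sketch of pricing via the SDF, reusing the illustrative two-state numbers above; note that the SDF price coincides with the state-price value, as the formulations are equivalent:

```python
def sdf_price(payoffs, p_phys, q_rn, r):
    """Price as E_P[m X], where the SDF in each state is
    m_s = q_s / ((1 + r) * p_s): the ratio of risk-neutral to physical
    probabilities, discounted at the risk-free rate."""
    m = [q / ((1 + r) * p) for p, q in zip(p_phys, q_rn)]
    return sum(p * ms * x for p, ms, x in zip(p_phys, m, payoffs))

# Same two-state claim as above: physical probabilities 50/50,
# risk-neutral probabilities 75/25, risk-free rate 5%:
price = sdf_price([10.0, 0.0], [0.5, 0.5], [0.75, 0.25], 0.05)
```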
Applying the above economic concepts, we may then derive various economic and financial models and principles. As outlined, the two areas of focus are Asset Pricing and Corporate Finance, the first being the perspective of providers of capital, the second of users of capital. Here, and for (almost) all other financial economics models, the questions addressed are typically framed in terms of "time, uncertainty, options, and information",[1][14] as will be seen below.
Applying this framework, with the above concepts, leads to the required models. This derivation begins with the assumption of "no uncertainty" and is then expanded to incorporate the other considerations.[4] (This division is sometimes denoted "deterministic" and "random",[23] or "stochastic".)
Bond valuation formula, where Coupons and Face value are discounted at the appropriate rate, "i": typically reflecting a spread over the risk-free rate as a function of credit risk; often quoted as a "yield to maturity". See body for discussion re the relationship with the above pricing formulae. |
DCF valuation formula, where the value of the firm is its forecasted free cash flows discounted to the present using the weighted average cost of capital, i.e. cost of equity and cost of debt, with the former (often) derived using the below CAPM. For share valuation investors use the related dividend discount model. |

The starting point here is "Investment under certainty", and usually framed in the context of a corporation. The Fisher separation theorem asserts that the objective of the corporation will be the maximization of its present value, regardless of the preferences of its shareholders. Related is the Modigliani–Miller theorem, which shows that, under certain conditions, the value of a firm is unaffected by how that firm is financed, and depends neither on its dividend policy nor its decision to raise capital by issuing stock or selling debt. The proof here proceeds using arbitrage arguments, and acts as a benchmark[10] for evaluating the effects of factors outside the model that do affect value.[note 5]
The mechanism for determining (corporate) value is provided by[26][27] John Burr Williams' The Theory of Investment Value, which proposes that the value of an asset should be calculated using "evaluation by the rule of present worth". Thus, for a common stock, the "intrinsic", long-term worth is the present value of its future net cashflows, in the form of dividends; in the corporate context, "free cash flow", as aside. What remains to be determined is the appropriate discount rate. Later developments show that, "rationally", i.e. in the formal sense, the appropriate discount rate here will (should) depend on the asset's riskiness relative to the overall market, as opposed to its owners' preferences; see below. Net present value (NPV) is the direct extension of these ideas, typically applied to Corporate Finance decisioning. For other results, as well as specific models developed here, see the list of "Equity valuation" topics under Outline of finance § Discounted cash flow valuation.[note 6]
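NPV can be sketched as follows, with an illustrative project (the numbers are assumptions):

```python
def npv(rate, cashflows):
    """Net present value: cashflows[0] falls at t=0, cashflows[1] at t=1, ..."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# A project costing 1000 today, returning 400 a year for three years,
# discounted at a 10% required return:
value = npv(0.10, [-1000.0, 400.0, 400.0, 400.0])
```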
Bond valuation, in that cashflows (coupons and return of principal, or "Face value") are deterministic, may proceed in the same fashion.[23] An immediate extension, Arbitrage-free bond pricing, discounts each cashflow at the market-derived rate – i.e. at each coupon's corresponding zero rate, and of equivalent credit worthiness – as opposed to an overall rate. In many treatments bond valuation precedes equity valuation, under which cashflows (dividends) are not "known" per se. Williams and onward allow for forecasting as to these – based on historic ratios or published dividend policy – and cashflows are then treated as essentially deterministic; see below under § Corporate finance theory.
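Arbitrage-free bond pricing can be sketched as below; the zero curve is an illustrative assumption:

```python
def bond_price_zero_curve(face, coupon_rate, zero_rates):
    """Discount each cashflow at its maturity's zero rate, per arbitrage-free
    bond pricing; zero_rates[t-1] applies to the year-t cashflow."""
    n = len(zero_rates)
    price = 0.0
    for t, z in enumerate(zero_rates, start=1):
        cf = face * coupon_rate + (face if t == n else 0.0)
        price += cf / (1 + z) ** t
    return price

# A 3-year 5% annual-coupon bond on an upward-sloping (illustrative) zero curve:
price = bond_price_zero_curve(100.0, 0.05, [0.03, 0.04, 0.045])
```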
For both stocks and bonds, "under certainty, with the focus on cash flows from securities over time," valuation based on a term structure of interest rates is in fact consistent with arbitrage-free pricing.[28] Indeed, a corollary of the above is that "the law of one price implies the existence of a discount factor";[29] correspondingly, as formulated, $\sum_s \pi_s = 1/(1+r)$.
Whereas these "certainty" results are all commonly employed under corporate finance, uncertainty is the focus of "asset pricing models" as follows. Fisher's formulation of the theory here – developing an intertemporal equilibrium model – also underpins[26] the below applications to uncertainty;[note 7] see[30] for the development.


The capital asset pricing model (CAPM): $E(R_i) = R_f + \beta_i \,(E(R_m) - R_f)$. The expected return used when discounting cashflows on an asset, $E(R_i)$, is the risk-free rate plus the market premium multiplied by beta ($\beta_i$), the asset's correlated volatility relative to the overall market. |

For "choice under uncertainty" the twin assumptions of rationality and market efficiency, as more closely defined, lead to modern portfolio theory (MPT) with its capital asset pricing model (CAPM) – an equilibrium-based result – and to the Black–Scholes–Merton theory (BSM; often, simply Black–Scholes) for option pricing – an arbitrage-free result. As above, the (intuitive) link between these is that the latter derivative prices are calculated such that they are arbitrage-free with respect to the more fundamental, equilibrium determined, securities prices; see Asset pricing § Interrelationship.
Briefly, and intuitively – and consistent with § Arbitrage-free pricing and equilibrium above – the relationship between rationality and efficiency is as follows.[31] Given the ability to profit from private information, self-interested traders are motivated to acquire and act on their private information. In doing so, traders contribute to more and more "correct", i.e. efficient, prices: the efficient-market hypothesis, or EMH. Thus, if prices of financial assets are (broadly) efficient, then deviations from these (equilibrium) values could not last for long. (See earnings response coefficient.) The EMH (implicitly) assumes that average expectations constitute an "optimal forecast", i.e. prices using all available information are identical to the best guess of the future: the assumption of rational expectations. The EMH does allow that when faced with new information, some investors may overreact and some may underreact,[32] but what is required, however, is that investors' reactions follow a normal distribution – so that the net effect on market prices cannot be reliably exploited[32] to make an abnormal profit. In the competitive limit, then, market prices will reflect all available information and prices can only move in response to news:[33] the random walk hypothesis. This news, of course, could be "good" or "bad", minor or, less common, major; and these moves are then, correspondingly, normally distributed; with the price therefore following a log-normal distribution.[note 8]
Under these conditions, investors can then be assumed to act rationally: their investment decision must be calculated or a loss is sure to follow;[32] correspondingly, where an arbitrage opportunity presents itself, then arbitrageurs will exploit it, reinforcing this equilibrium. Here, as under the certainty-case above, the specific assumption as to pricing is that prices are calculated as the present value of expected future dividends,[5][33][14] as based on currently available information. What is required, though, is a theory for determining the appropriate discount rate, i.e. "required return", given this uncertainty: this is provided by the MPT and its CAPM. Relatedly, rationality – in the sense of arbitrage-exploitation – gives rise to Black–Scholes; option values here ultimately consistent with the CAPM.
In general, then, while portfolio theory studies how investors should balance risk and return when investing in many assets or securities, the CAPM is more focused, describing how, in equilibrium, markets set the prices of assets in relation to how risky they are.[note 9] This result will be independent of the investor's level of risk aversion and assumed utility function, thus providing a readily determined discount rate for corporate finance decision makers as above,[36] and for other investors. The argument proceeds as follows:[37] If one can construct an efficient frontier – i.e. each combination of assets offering the best possible expected level of return for its level of risk, see diagram – then mean-variance efficient portfolios can be formed simply as a combination of holdings of the risk-free asset and the "market portfolio" (the Mutual fund separation theorem), with the combinations here plotting as the capital market line, or CML. Then, given this CML, the required return on a risky security will be independent of the investor's utility function, and solely determined by its covariance ("beta") with aggregate, i.e. market, risk. This is because investors here can then maximize utility through leverage as opposed to stock selection; see Separation property (finance), Markowitz model § Choosing the best portfolio, and CML diagram aside. As can be seen in the formula aside, this result is consistent with the preceding, equaling the riskless return plus an adjustment for risk.[5] A more modern, direct, derivation is as described at the bottom of this section; which can be generalized to derive other equilibrium-pricing models.
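The CAPM's required-return formula can be sketched as follows, with illustrative inputs:

```python
def capm_required_return(risk_free, market_return, beta):
    """CAPM: required return = risk-free rate + beta * market premium."""
    return risk_free + beta * (market_return - risk_free)

# A stock with beta 1.2, a 3% risk-free rate and an 8% expected market return:
r = capm_required_return(0.03, 0.08, 1.2)
```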

| The Black–Scholes equation: $\dfrac{\partial V}{\partial t} + \dfrac{1}{2}\sigma^2 S^2 \dfrac{\partial^2 V}{\partial S^2} + rS\dfrac{\partial V}{\partial S} - rV = 0$ |
| The Black–Scholes formula for the value of a call option: $C(S,t) = N(d_1)S - N(d_2)Ke^{-r(T-t)}$, where $d_1 = \dfrac{\ln(S/K) + (r + \sigma^2/2)(T-t)}{\sigma\sqrt{T-t}}$ and $d_2 = d_1 - \sigma\sqrt{T-t}$ |
Black–Scholes provides a mathematical model of a financial market containing derivative instruments, and the resultant formula for the price of European-styled options.[note 10] The model is expressed as the Black–Scholes equation, a partial differential equation describing the changing price of the option over time; it is derived assuming log-normal, geometric Brownian motion (see Brownian model of financial markets). The key financial insight behind the model is that one can perfectly hedge the option by buying and selling the underlying asset in just the right way and consequently "eliminate risk", absenting the risk adjustment from the pricing ($V$, the value, or price, of the option, grows at $r$, the risk-free rate).[6][5] This hedge, in turn, implies that there is only one right price – in an arbitrage-free sense – for the option. And this price is returned by the Black–Scholes option pricing formula. (The formula, and hence the price, is consistent with the equation, as the formula is the solution to the equation.) Since the formula is without reference to the share's expected return, Black–Scholes inheres risk neutrality; intuitively consistent with the "elimination of risk" here, and mathematically consistent with § Arbitrage-free pricing and equilibrium above. Relatedly, therefore, the pricing formula may also be derived directly via risk-neutral expectation. Itô's lemma provides the underlying mathematics, and, with Itô calculus more generally, remains fundamental in quantitative finance.[note 11]
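The closed-form call price can be sketched directly from the formula, using only the standard library; inputs are illustrative:

```python
from math import log, sqrt, exp
from statistics import NormalDist

def black_scholes_call(s, k, t, r, sigma):
    """Black–Scholes price of a European call on a non-dividend-paying stock."""
    n = NormalDist().cdf
    d1 = (log(s / k) + (r + sigma ** 2 / 2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return s * n(d1) - k * exp(-r * t) * n(d2)

# An at-the-money one-year call, 5% rate, 20% volatility:
c = black_scholes_call(100.0, 100.0, 1.0, 0.05, 0.20)
```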
As implied by the Fundamental Theorem, the two major results are consistent. Here, the Black–Scholes equation can alternatively be derived from the CAPM, and the price obtained from the Black–Scholes model is thus consistent with the assumptions of the CAPM.[46][12] The Black–Scholes theory, although built on arbitrage-free pricing, is therefore consistent with the equilibrium-based capital asset pricing. Both models, in turn, are ultimately consistent with the Arrow–Debreu theory, and can be derived via state-pricing – essentially, by expanding the above fundamental equations – further explaining, and if required demonstrating, this consistency.[6] Here, the CAPM is derived[15] by linking $m$, risk aversion, to overall market return, and setting the return on the security accordingly; see Stochastic discount factor § Properties. The Black–Scholes formula is found, in the limit,[47] by attaching a binomial probability[10] to each of numerous possible spot-prices (i.e. states) and then rearranging for the terms corresponding to $N(d_1)$ and $N(d_2)$, per the boxed description; see Binomial options pricing model § Relationship with Black–Scholes.
More recent work further generalizes and extends these models. As regards asset pricing, developments in equilibrium-based pricing are discussed under "Portfolio theory" below, while "Derivative pricing" relates to risk-neutral, i.e. arbitrage-free, pricing. As regards the use of capital, "Corporate finance theory" relates, mainly, to the application of these models.


Much of the development here relates to required return, i.e. pricing, extending the basic CAPM. Multi-factor models such as the Fama–French three-factor model and the Carhart four-factor model propose factors other than market return as relevant in pricing. The intertemporal CAPM and consumption-based CAPM similarly extend the model. With intertemporal portfolio choice, the investor now repeatedly optimizes her portfolio; while the inclusion of consumption (in the economic sense) then incorporates all sources of wealth, and not just market-based investments, into the investor's calculation of required return.
Whereas the above extend the CAPM, the single-index model is a simpler model. It assumes only a correlation between security and market returns, without (numerous) other economic assumptions. It is useful in that it simplifies the estimation of correlation between securities, significantly reducing the inputs for building the correlation matrix required for portfolio optimization.
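Under the single-index model, a security's beta is simply the OLS slope of its returns on market returns; a sketch with illustrative (exactly proportional) return series:

```python
def estimate_beta(asset_returns, market_returns):
    """Single-index model: OLS slope of asset returns on market returns."""
    n = len(market_returns)
    mean_a = sum(asset_returns) / n
    mean_m = sum(market_returns) / n
    cov = sum((a - mean_a) * (m - mean_m)
              for a, m in zip(asset_returns, market_returns)) / n
    var_m = sum((m - mean_m) ** 2 for m in market_returns) / n
    return cov / var_m

# Illustrative monthly returns: the asset moves exactly 1.5x the market
market = [0.01, -0.02, 0.03, 0.005, -0.01]
asset = [0.015, -0.03, 0.045, 0.0075, -0.015]
beta = estimate_beta(asset, market)
```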
The arbitrage pricing theory (APT) similarly differs as regards its assumptions. APT "gives up the notion that there is one right portfolio for everyone in the world, and ...replaces it with an explanatory model of what drives asset returns."[48] It returns the required (expected) return of a financial asset as a linear function of various macro-economic factors, and assumes that arbitrage should bring incorrectly priced assets back into line.[note 12]
The linear factor model structure of the APT is used as the basis for many of the commercial risk and fund management systems employed by asset managers. Here,[51][52] managers apply various of the abovementioned multi-factor models – often bespoke extensions – such that their portfolio has the desired exposure ("tilt") to macroeconomic, market and / or fundamental risk factors;[53] respectively: Macro-, Factor-, and Style Funds. Research here, and application (e.g. "Smart Beta"), is ongoing, and over the years "hundreds of factors attempt to explain the cross-section of expected returns";[54] with this "factor zoo",[55] there is a risk of data mining, and researchers have proposed various criteria for establishing significance.
At the same time as these, "classic" mean-variance optimization – i.e. building an efficient portfolio as described above – is still widely used by Asset Allocation Funds.[56] Here, given issues noted with this approach, the application is typically in combination with other techniques. The Black–Litterman model[57] is often employed. Black–Litterman departs from the original Markowitz model approach: it starts with an equilibrium assumption, as for the latter, but this is then modified to take into account the "views" (i.e., the specific opinions about asset returns) of the investor in question to arrive at a bespoke[58] asset allocation. A further modification often used[59] relates to the subsequent optimization: under (Tail) risk parity, the focus is on the allocation of risk, rather than the allocation of capital.
Other developments re portfolio optimization include the following. Where factors additional to volatility are considered (kurtosis, skew...) then multiple-criteria decision analysis can be applied; here deriving a Pareto efficient portfolio. The universal portfolio algorithm identifies the "growth optimal portfolio" per the Kelly criterion, applying information theory to asset selection. Behavioral portfolio theory recognizes that investors have varied aims and create an investment portfolio that meets a broad range of goals. Copulas have lately been applied here; recently this is the case also for genetic algorithms and[60] machine learning, more generally; see below.[61]

PDE for a zero-coupon bond: $\dfrac{\partial P}{\partial t} + \dfrac{1}{2}\sigma^2\dfrac{\partial^2 P}{\partial r^2} + (\mu - \lambda\sigma)\dfrac{\partial P}{\partial r} - rP = 0$. Interpretation: Analogous to Black–Scholes,[62] arbitrage arguments describe the instantaneous change in the bond price $P$ for changes in the (risk-free) short rate $r$; the analyst selects the specific short-rate model to be employed. |

In pricing derivatives, the binomial options pricing model provides a discretized version of Black–Scholes, useful for the valuation of American-styled options. Discretized models of this type are built – at least implicitly – using state-prices (as above); relatedly, a large number of researchers have used options to extract state-prices for a variety of other applications in financial economics.[6][46][21] For path-dependent derivatives, Monte Carlo methods for option pricing are employed; here the modelling is in continuous time, but similarly uses risk-neutral expected value. Various other numeric techniques have also been developed. The theoretical framework too has been extended such that martingale pricing is now the standard approach.[note 13]
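A sketch of such a discretized model: a Cox–Ross–Rubinstein-style binomial tree for an American put, with an early-exercise check at each node (parameters are illustrative):

```python
from math import exp, sqrt

def binomial_american_put(s0, k, t, r, sigma, steps):
    """CRR binomial tree for an American put: backward induction with an
    early-exercise comparison at each node."""
    dt = t / steps
    u = exp(sigma * sqrt(dt))
    d = 1 / u
    q = (exp(r * dt) - d) / (u - d)      # risk-neutral up-probability
    disc = exp(-r * dt)
    # terminal payoffs: node j has had j up-moves out of `steps`
    values = [max(k - s0 * u ** j * d ** (steps - j), 0.0)
              for j in range(steps + 1)]
    for i in range(steps - 1, -1, -1):
        # continuation value vs. immediate exercise at each node
        values = [max(disc * (q * values[j + 1] + (1 - q) * values[j]),
                      k - s0 * u ** j * d ** (i - j))
                  for j in range(i + 1)]
    return values[0]

p = binomial_american_put(100.0, 100.0, 1.0, 0.05, 0.20, 200)
```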
Drawing on these techniques, models for various other underlyings and applications have also been developed, all based on the same logic (using "contingent claim analysis"). Real options valuation allows that option holders can influence the option's underlying; models for employee stock option valuation explicitly assume non-rationality on the part of option holders; Credit derivatives allow that payment obligations or delivery requirements might not be honored. Exotic derivatives are now routinely valued. Multi-asset underlyers are handled via simulation or copula-based analysis.
Similarly, the various short-rate models allow for an extension of these techniques to fixed income and interest rate derivatives. (The Vasicek and CIR models are equilibrium-based, while Ho–Lee and subsequent models are based on arbitrage-free pricing.) The more general HJM Framework describes the dynamics of the full forward-rate curve – as opposed to working with short rates – and is then more widely applied. The valuation of the underlying instrument – additional to its derivatives – is relatedly extended, particularly for hybrid securities, where credit risk is combined with uncertainty re future rates; see Bond valuation § Stochastic calculus approach and Lattice model (finance) § Hybrid securities.[note 14]
Following the Crash of 1987, equity options traded in American markets began to exhibit what is known as a "volatility smile"; that is, for a given expiration, options whose strike price differs substantially from the underlying asset's price command higher prices, and thus implied volatilities, than what is suggested by BSM. (The pattern differs across various markets.) Modelling the volatility smile is an active area of research, and developments here – as well as implications re the standard theory – are discussed in the next section.
After the 2008 financial crisis, a further development:[71] as outlined, (over the counter) derivative pricing had relied on the BSM risk-neutral pricing framework, under the assumptions of funding at the risk-free rate and the ability to perfectly replicate cashflows so as to fully hedge. This, in turn, is built on the assumption of a credit-risk-free environment – called into question during the crisis. Addressing this, therefore, issues such as counterparty credit risk, funding costs and costs of capital are now additionally considered when pricing,[72] and a credit valuation adjustment, or CVA – and potentially other valuation adjustments, collectively xVA – is generally added to the risk-neutral derivative value. The standard economic arguments can be extended to incorporate these various adjustments.[73]
A related, and perhaps more fundamental change, is that discounting is now on the Overnight Index Swap (OIS) curve, as opposed to LIBOR as used previously.[71] This is because post-crisis, the overnight rate is considered a better proxy for the "risk-free rate".[74] (Also, practically, the interest paid on cash collateral is usually the overnight rate; OIS discounting is then, sometimes, referred to as "CSA discounting".) Swap pricing – and, therefore, yield curve construction – is further modified: previously, swaps were valued off a single "self-discounting" interest rate curve; whereas post-crisis, to accommodate OIS discounting, valuation is now under a "multi-curve framework" where "forecast curves" are constructed for each floating-leg LIBOR tenor, with discounting on the common OIS curve.

Mirroring the above developments, corporate finance valuations and decisioning no longer need assume "certainty".[note 15] Monte Carlo methods in finance allow financial analysts to construct "stochastic" or probabilistic corporate finance models, as opposed to the traditional static and deterministic models;[78] see Corporate finance § Quantifying uncertainty. Relatedly, Real Options theory allows for owner – i.e. managerial – actions that impact underlying value: by incorporating option pricing logic, these actions are then applied to a distribution of future outcomes, changing with time, which then determine the "project's" valuation today.[79] More traditionally, decision trees – which are complementary – have been used to evaluate projects, by incorporating in the valuation (all) possible events (or states) and consequent management decisions;[80][78] the correct discount rate here reflecting each decision-point's "non-diversifiable risk looking forward."[78]
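A minimal sketch of such a probabilistic model: instead of a single point estimate per year, each year's cash flow is drawn from a distribution, and the NPV is averaged over many simulated paths (all parameters illustrative; real models would correlate cashflows and use richer distributions):

```python
import math
import random

def mc_npv(n_paths, n_years, rate, base_cashflow, vol, seed=0):
    """Monte Carlo NPV: each year's cash flow is lognormal around
    base_cashflow (mean-one multiplier), discounted at `rate`;
    returns the average NPV across simulated paths."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        npv = 0.0
        for t in range(1, n_years + 1):
            # exp(gauss(-vol^2/2, vol)) has mean 1, so E[cf] = base_cashflow
            cf = base_cashflow * math.exp(rng.gauss(-0.5 * vol**2, vol))
            npv += cf / (1.0 + rate) ** t
        total += npv
    return total / n_paths
```

The full distribution of simulated NPVs (not just the mean) is what distinguishes this from the deterministic model.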
Related to this is the treatment of forecasted cashflows in equity valuation. In many cases, following Williams above, the average (or most likely) cash-flows were discounted,[81] as opposed to a theoretically correct state-by-state treatment under uncertainty; see comments under Financial modeling § Accounting. In more modern treatments, then, it is the expected cashflows (in the mathematical sense), combined into an overall value per forecast period, which are discounted.[82][83][84][78] And using the CAPM – or extensions – the discounting here is at the risk-free rate plus a premium linked to the uncertainty of the entity or project cash flows[78] (essentially, the two combined).
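A toy one-period example of this treatment (numbers purely illustrative): the probability-weighted expected cashflow is formed first, and only then discounted at the risk-free rate plus a risk premium:

```python
# (probability, cashflow) per state for a single forecast period -- illustrative
scenarios = [(0.25, 80.0), (0.50, 100.0), (0.25, 130.0)]

# expected cashflow in the mathematical sense: sum of prob * cashflow
expected_cf = sum(p * cf for p, cf in scenarios)   # 102.5

# CAPM-style discounting: risk-free rate plus a premium for cashflow uncertainty
risk_free, premium = 0.03, 0.05
value_today = expected_cf / (1.0 + risk_free + premium)
```

Note that discounting the "most likely" cashflow (here 100.0) would understate value relative to the expected-cashflow treatment.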
Other developments here include[85] agency theory, which analyses the difficulties in motivating corporate management (the "agent"; in a different sense to the above) to act in the best interests of shareholders (the "principal"), rather than in their own interests; here emphasizing the issues related to capital structure.[86] Clean surplus accounting and the related residual income valuation provide a model that returns price as a function of earnings, expected returns, and change in book value, as opposed to dividends. This approach, to some extent, arises due to the implicit contradiction of seeing value as a function of dividends, while also holding that dividend policy cannot influence value, per Modigliani and Miller's "Irrelevance principle"; see Dividend policy § Relevance of dividend policy.
"Corporate finance" as a discipline more generally, building on Fisher above, relates to the long-term objective of maximizing the value of the firm – and its return to shareholders – and thus also incorporates the areas of capital structure and dividend policy.[87] Extensions of the theory here then also consider these latter, as follows: (i) optimization of capital structure, and theories here as to corporate choices and behavior: Capital structure substitution theory, Pecking order theory, Market timing hypothesis, Trade-off theory; (ii) considerations and analysis of dividend policy, additional to – and sometimes contrasting with – Modigliani–Miller, including: the Walter model, Lintner model, Residuals theory and signaling hypothesis, as well as discussion of the observed clientele effect and dividend puzzle.
As described, the typical application of real options is to capital budgeting type problems. However, here, they are also applied to problems of capital structure and dividend policy, and to the related design of corporate securities;[88] and, since stockholders and bondholders have different objective functions, in the analysis of the related agency problems.[79] In all of these cases, state-prices can provide the market-implied information relating to the corporate, as above, which is then applied to the analysis. For example, convertible bonds can (indeed must) be priced consistently with the (recovered) state-prices of the corporate's equity.[20][82]

The discipline, as outlined, also includes a formal study of financial markets. Of interest especially are market regulation and market microstructure, and their relationship to price efficiency.
Regulatory economics studies, in general, the economics of regulation. In the context of finance, it will address the impact of financial regulation on the functioning of markets and the efficiency of prices, while also weighing the corresponding increases in market confidence and financial stability. Research here considers how, and to what extent, regulations relating to disclosure (earnings guidance, annual reports), insider trading, and short-selling will impact price efficiency, the cost of equity, and market liquidity.[89]
Market microstructure is concerned with the details of how exchange occurs in markets (with Walrasian-, matching-, Fisher-, and Arrow-Debreu markets as prototypes), and "analyzes how specific trading mechanisms affect the price formation process",[90] examining the ways in which the processes of a market affect determinants of transaction costs, prices, quotes, volume, and trading behavior. It has been used, for example, in providing explanations for long-standing exchange rate puzzles,[91] and for the equity premium puzzle.[92] In contrast to the above classical approach, models here explicitly allow for (testing the impact of) market frictions and other imperfections; see also market design.
For both regulation[93] and microstructure,[94] and generally,[95] agent-based models can be developed[96] to examine any impact due to a change in structure or policy – or to make inferences about market dynamics – by testing these in an artificial financial market, or AFM.[note 16] This approach, essentially simulated trade between numerous agents, "typically uses artificial intelligence technologies [often genetic algorithms and neural nets] to represent the adaptive behaviour of market participants".[96]
These 'bottom-up' models "start from first principles of agent behavior",[97] with participants modifying their trading strategies having learned over time, and "are able to describe macro features [i.e. stylized facts] emerging from a soup of individual interacting strategies".[97] Agent-based models depart further from the classical approach – the representative agent, as outlined – in that they introduce heterogeneity into the environment (thereby addressing, also, the aggregation problem).
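A minimal sketch of such a market, assuming just two heterogeneous agent types (a fundamentalist who trades toward a fixed fundamental value, and a trend-following chartist) whose net demand, plus noise, moves the price; all coefficients are illustrative:

```python
import random

def simulate_afm(n_steps, fundamental=100.0, seed=1):
    """Toy artificial financial market: price is an emergent result of the
    net demand of two agent types (fundamentalist vs. trend-follower)."""
    rng = random.Random(seed)
    prices = [fundamental, fundamental]
    for _ in range(n_steps):
        p = prices[-1]
        fundamentalist = 0.05 * (fundamental - p)    # buys below value, sells above
        chartist = 0.04 * (prices[-1] - prices[-2])  # chases the recent trend
        noise = rng.gauss(0.0, 0.5)                  # liquidity/noise trades
        prices.append(p + fundamentalist + chartist + noise)
    return prices
```

Even this crude version produces mean-reverting prices with short bursts of momentum; richer AFMs let the strategies themselves adapt over time.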
More recent research[61] focuses on the potential impact of machine learning on market functioning and efficiency. As these methods become more prevalent in financial markets, economists would expect greater information acquisition and improved price efficiency.[98] In fact, an apparent rejection of market efficiency (see below) might simply represent "the unsurprising consequence of investors not having precise knowledge of the parameters of a data-generating process that involves thousands of predictor variables".[99] At the same time, it is acknowledged that a potential downside of these methods, in this context, is their lack of interpretability, "which translates into difficulties in attaching economic meaning to the results found."[61]
As above, there is a very close link between the random walk hypothesis, with the associated belief that price changes should follow a normal distribution, on the one hand, and market efficiency and rational expectations, on the other. Wide departures from these are commonly observed, and there are thus, respectively, two main sets of challenges.

The assumptions that market prices follow a random walk and that asset returns are normally distributed are fundamental. Empirical evidence, however, suggests that these assumptions may not hold, and that in practice, traders, analysts and risk managers frequently modify the "standard models" (see kurtosis risk, skewness risk, long tail, model risk). In fact, Benoit Mandelbrot had discovered already in the 1960s[100] that changes in financial prices do not follow a normal distribution, the basis for much option pricing theory, although this observation was slow to find its way into mainstream financial economics.[101]
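The departure from normality is easy to see in the fourth moment. A sketch comparing sample excess kurtosis (zero for a normal distribution, positive for fat tails) of a Gaussian sample against a crude fat-tailed sample built as a normal whose volatility is itself random, a stand-in for volatility clustering (all parameters illustrative):

```python
import random

def excess_kurtosis(xs):
    """Sample excess kurtosis: m4/m2^2 - 3; ~0 for normal data, >0 for fat tails."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return m4 / (m2 ** 2) - 3.0

rng = random.Random(0)
normal = [rng.gauss(0.0, 1.0) for _ in range(100_000)]
# Fat tails via a volatility mixture: calm 90% of the time, turbulent 10%
fat = [rng.gauss(0.0, 1.0) * (0.5 if rng.random() < 0.9 else 2.5)
       for _ in range(100_000)]
```

The mixture sample shows strongly positive excess kurtosis, in line with observed return series.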
Financial models with long-tailed distributions and volatility clustering have been introduced to overcome problems with the realism of the above "classical" financial models; while jump diffusion models allow for (option) pricing incorporating "jumps" in the spot price.[102] Risk managers, similarly, complement (or substitute) the standard value at risk models with historical simulations, mixture models, principal component analysis, extreme value theory, as well as models for volatility clustering.[103] For further discussion see Fat-tailed distribution § Applications in economics, and Value at risk § Criticism. Portfolio managers, likewise, have modified their optimization criteria and algorithms; see § Portfolio theory above.
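Historical simulation is the simplest of these alternatives: the loss quantile is read directly from observed returns, with no distributional assumption. A minimal sketch (function name illustrative):

```python
def historical_var(returns, confidence=0.99):
    """Historical-simulation VaR: the `confidence`-quantile of observed
    losses, taken straight from the empirical distribution -- no
    normality assumption."""
    losses = sorted(-r for r in returns)          # losses, ascending
    idx = int(confidence * len(losses))           # quantile index
    return losses[min(idx, len(losses) - 1)]
```

Because the quantile comes from the data itself, fat tails in the sample flow directly into the risk number, unlike a parametric normal VaR.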
Closely related is the volatility smile, where, as above, implied volatility – the volatility corresponding to the BSM price – is observed to differ as a function of strike price (i.e. moneyness) – possible only if the price-change distribution is non-normal, unlike that assumed by BSM. The term structure of volatility describes how (implied) volatility differs for related options with different maturities. An implied volatility surface is then a three-dimensional plot of volatility smile and term structure. These empirical phenomena negate the assumption of constant volatility – and log-normality – upon which Black–Scholes is built.[40][102] Within institutions, the function of Black–Scholes is now, largely, to communicate prices via implied volatilities, much like bond prices are communicated via YTM; see Black–Scholes model § The volatility smile.
In consequence, traders (and risk managers) now, instead, use "smile-consistent" models, firstly, when valuing derivatives not directly mapped to the surface, facilitating the pricing of other, i.e. non-quoted, strike/maturity combinations, or of non-European derivatives, and generally for hedging purposes. The two main approaches are local volatility and stochastic volatility. The first returns the volatility which is "local" to each spot-time point of the finite difference- or simulation-based valuation; i.e. as opposed to implied volatility, which holds overall. In this way calculated prices – and numeric structures – are market-consistent in an arbitrage-free sense. The second approach assumes that the volatility of the underlying price is a stochastic process rather than a constant. Models here are first calibrated to observed prices, and are then applied to the valuation or hedging in question; the most common are Heston, SABR and CEV. This approach addresses certain problems identified with hedging under local volatility.[104]
Related to local volatility are the lattice-based implied-binomial and -trinomial trees – essentially a discretization of the approach – which are similarly, but less commonly,[19] used for pricing; these are built on state-prices recovered from the surface. Edgeworth binomial trees allow for a specified (i.e. non-Gaussian) skew and kurtosis in the spot price; priced here, options with differing strikes will return differing implied volatilities, and the tree can be calibrated to the smile as required.[105] Similarly purposed (and derived) closed-form models were also developed.[106]
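For orientation, the standard (constant-volatility) Cox–Ross–Rubinstein binomial lattice, which the implied trees generalize by making the up/down moves state-dependent, can be sketched as follows (a minimal European-call version; the implied-tree machinery itself is beyond this sketch):

```python
import math

def crr_call(S, K, T, r, sigma, n):
    """Cox-Ross-Rubinstein binomial price of a European call with n steps;
    converges to the Black-Scholes value as n grows."""
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))   # up factor
    d = 1.0 / u                           # down factor
    q = (math.exp(r * dt) - d) / (u - d)  # risk-neutral up probability
    disc = math.exp(-r * dt)
    # payoffs at maturity, indexed by number of up-moves j
    values = [max(S * u**j * d**(n - j) - K, 0.0) for j in range(n + 1)]
    # backward induction: discounted risk-neutral expectation at each node
    for step in range(n, 0, -1):
        values = [disc * (q * values[j + 1] + (1 - q) * values[j])
                  for j in range(step)]
    return values[0]
```

An implied tree replaces the constant u, d, q with node-level values recovered from the surface's state-prices, so that the lattice reprices the quoted smile.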
As discussed, additional to assuming log-normality in returns, "classical" BSM-type models also (implicitly) assume the existence of a credit-risk-free environment, where one can perfectly replicate cashflows so as to fully hedge, and then discount at "the" risk-free rate. Therefore, post-crisis, the various x-value adjustments must be employed, effectively correcting the risk-neutral value for counterparty- and funding-related risk. These xVA are additional to any smile or surface effect: since the surface is built on price data for fully-collateralized positions, there is no "double counting" of credit risk (etc.) when appending xVA. (Were this not the case, then each counterparty would have its own surface...)
As mentioned at top, mathematical finance (and particularly financial engineering) is more concerned with mathematical consistency (and market realities) than compatibility with economic theory, and the above "extreme event" approaches, smile-consistent modeling, and valuation adjustments should then be seen in this light. Recognizing this, critics of financial economics – especially vocal since the 2008 financial crisis – suggest that instead, the theory needs revisiting almost entirely:[note 17]
The current system, based on the idea that risk is distributed in the shape of a bell curve, is flawed... The problem is [that economists and practitioners] never abandon the bell curve. They are like medieval astronomers who believe the sun revolves around the earth and are furiously tweaking their geo-centric math in the face of contrary evidence. They will never get this right; they need their Copernicus.[107]
| Market anomalies and economic puzzles |
As seen, a common assumption is that financial decision makers act rationally; see Homo economicus. Recently, however, researchers in experimental economics and experimental finance have challenged this assumption empirically. These assumptions are also challenged theoretically, by behavioral finance,[108] a discipline primarily concerned with the limits to rationality of economic agents.[note 18] For related criticism of corporate finance theory vs its practice see:[109]
Various market anomalies have been documented in parallel. These comprise price "distortions", e.g. size premiums, or return "predictability"[110] as exemplified by the various calendar effects, and are anomalous in the sense that[110] they cannot be explained by traditional economic theories, (apparently) contradicting the efficient-market hypothesis. Related to these are various of the economic puzzles, similarly contradicting the theory. The equity premium puzzle, as one example, arises in that the difference between the observed returns on stocks as compared to government bonds is consistently higher than the risk premium rational equity investors should demand, an "abnormal return". For further context see Random walk hypothesis § A non-random walk hypothesis, and the sidebar for specific instances.
More generally, and, again, particularly following the 2008 financial crisis, financial economics (and mathematical finance) has been subjected to deeper criticism. Notable here is Nassim Taleb, whose critique overlaps the above, but extends[111] also to the institutional[112][113] aspects of finance – including the academic.[114][40] His Black swan theory posits that although events of large magnitude and consequence play a major role in finance, since these are (statistically) unexpected, they are "ignored" by economists and traders (who often have no skin in the game). Thus, although a "Taleb distribution" – which normally provides a payoff of small positive returns, while carrying a small but significant risk of catastrophic losses – more realistically describes markets than current models, the latter continue to be preferred (even with professionals here acknowledging that it only "generally works" or only "works on average").[115]
Here,[112] financial crises have been a topic of interest,[116] and, in particular, the failure[113] of (financial) economists – as well as[112] bankers and regulators – to model and predict these. See Subprime crisis background information § Stages of the crisis. The related problem of systemic risk has also received attention: where companies hold securities in each other, this interconnectedness may entail a "valuation chain" – and the performance of one company, or security, here will impact all; a phenomenon not easily modeled, regardless of whether the individual models are correct. See: Systemic risk § Inadequacy of classic valuation models; Cascades in financial networks; Flight-to-quality.
Areas of research attempting to explain (or at least model) these phenomena, and crises, include[14] market microstructure and heterogeneous agent models, as above. The latter is extended to agent-based computational models; here,[95] as mentioned, price is treated as an emergent phenomenon, resulting from the interaction of the various market participants (agents). The noisy market hypothesis argues that prices can be influenced by speculators and momentum traders, as well as by insiders and institutions that often buy and sell stocks for reasons unrelated to fundamental value; see Noise (economic) and Noise trader. The adaptive market hypothesis is an attempt to reconcile the efficient market hypothesis with behavioral economics, by applying the principles of evolution to financial interactions. An information cascade, alternatively, shows market participants engaging in the same acts as others ("herd behavior"), despite contradictions with their private information. Copula-based modelling has similarly been applied. See also Hyman Minsky's "financial instability hypothesis", as well as George Soros' application of "reflexivity". In the alternative, institutionally inherent limits to arbitrage – i.e. as opposed to factors directly contradictory to the theory – are sometimes referenced.
Note, however, that despite the above inefficiencies, asset prices do effectively[32] follow a random walk – i.e. (at least) in the sense that "changes in the stock market are unpredictable, lacking any pattern that can be used by an investor to beat the overall market".[117] Thus, after fund costs – and given other considerations – it is difficult to consistently outperform market averages[118] and achieve "alpha". The practical implication[119] is that passive investing, i.e. via low-cost index funds, should, on average, serve better than any other active strategy – and, in fact, this practice is now widely adopted.[note 19] Here, however, the following concern is posited: although in concept, it is "the research undertaken by active managers [that] keeps prices closer to value... [and] thus there is a fragile equilibrium in which some investors choose to index while the rest continue to search for mispriced securities";[119] in practice, as more investors "pour money into index funds tracking the same stocks, valuations for those companies become inflated",[120] potentially leading to asset bubbles.