Friday, December 27, 2019

Defining Marketing - 1022 Words

To fully understand the importance of marketing to organizational success, one must understand what marketing is. Marketing and marketing decisions are the key to an organization's success; without the marketing process and marketing strategies, an organization is sure to fail. To me, marketing is the communication of products to a specific target market, and marketing plans are based on the four P's of the marketing mix: product, place, promotion, and price (Perreault, Cannon, & McCarthy, 2011, p. 35). According to the American Marketing Association, "marketing is the activity, set of institutions, and processes for creating, communicating, delivering, and exchanging offerings that have value for customers, clients, partners, and…" … This is the organization's way to promote its goods and services. The last "P," price, is when the organization must determine the right price for its goods or services. The organization must consider the competition in the area and the cost of the marketing mix. In the marketing mix the P's are equally important, and they come together to form a complete circle.

"Part of Starbucks' success comes from adapting its marketing strategy to changing market conditions" (Perreault, Cannon, & McCarthy, 2011, p. 3). Today's economy challenges companies, and for an organization to do well, it has to maintain its marketing strategies and change its plans when market conditions change. At Starbucks, customer service levels dropped, so customers were not getting the experiences they expected; it was not until Starbucks gained more employees and the level of customer service improved that the business began to rise again. Another example is Cirque du Soleil. "Cirque du Soleil's marketing managers constantly evaluate new opportunities" (Perreault, Cannon, & McCarthy, 2011, p. 31). Cirque du Soleil focuses on developing new shows and on opportunities to expand into other target markets. It is a very successful organization, but it continually explores opportunities to make its customers' experiences better. The last…

Thursday, December 19, 2019

Business Ethics Paper - 997 Words

Business Ethics Paper
BUS 415
November 14, 2011
Michael Green

The same ethical issues in the business world have been around for a long time. In theory, business ethics is a practical discipline that dictates the moral activity of commercial interests. The history of business ethics is founded in corporate social responsibility (CSR). Entertainer Shirley Jones filed suit in California against the tabloid company The National Enquirer, whose home office is based in Florida. According to the suit, Jones sued for damages for defamation, invasion of privacy, and emotional distress. This paper will discuss how a suit can be filed in one state when the defendant lives in another, and what type of paper The… … It would make sense for The Enquirer to believe that its home state would have personal jurisdiction over the business. The National Enquirer had no way of knowing about the long-arm statute that could be used to allow the courts in California to assert jurisdiction over it in Florida. The Enquirer's office in Florida argued against the statute; the arguments were accepted by the lower courts but rejected by the California Supreme Court. The Enquirer argued that it was not responsible for the spread of the article in California because it had no economic stake in the sales of the tabloid in California. But a clause in the Fourteenth Amendment permits personal jurisdiction over a defendant in another state if the defendant has minimum contact with the state (Calder v. Jones, 465 U.S. 783 (1984), 2011).

Are the defendants subject to suit in California? The defendant in the case, The National Enquirer, Inc., is located in Florida, and the plaintiff, actress Shirley Jones, in California. The issue is that the defendant and the plaintiff reside in two different states, so are the defendants subject to suit in California if they reside in Florida? That is determined by the courts: the question is whether the employees of The Enquirer responsible for the story had minimal contact in the state of the suit, by way of the plaintiff Shirley Jones providing proof that The Enquirer or its employees had contact in the state of California while working on…

Wednesday, December 11, 2019

Generic Strategy of Ansoff Matrix and Porter - Free Sample

Question: Describe the generic strategies of the Ansoff Matrix and Porter.

Answer:

Introduction

The report will discuss in detail the Ansoff Matrix and Porter's generic strategies. It will highlight how the Ansoff Matrix is applied to various organisations and what the significance of each quadrant is for different organisations. Further, the report will focus on Porter's generic strategies and how these strategies can benefit different types of organisations.

Discussion of the Ansoff Matrix

Penetration: there are situations where a company tries to sell more of its product to its present, loyal customers, and for that it engages in a penetration strategy (Jarratt & Stiles, 2010). This can be done in various ways: by changing the pricing, by adding small new and improved features which add value to the product, by changing the packaging (for example, sachets of shampoo), or by highlighting other uses of the product (Jarratt & Stiles, 2010). A good example is Cadbury India, where the company pushes the message to buy chocolates instead of sweets, specifically during festivals (Hussain et al., 2013).

Product development: a company like McDonald's introduces different varieties of cuisine from time to time in order to retain its existing customers, and many of the items push the concept of health and fitness for health-conscious people (Jarratt & Stiles, 2010). For example, McDonald's introduced salads, which are not something it is known for; however, with pressure from within the system and consumer behaviour shifting towards health and fitness, the company had to take the decision to develop its product line (Jarratt & Stiles, 2010).

Market development: market development happens when an existing product is introduced into a different market (Hussain et al., 2013). This is one of the most used strategies for extracting all the advantages of a successful product, for example by entering new geographical areas at the national and international level (Hussain et al., 2013). Apple, for instance, introduced many types of iPod within the same category; the iPod Touch was made to resemble the iPhone, the only difference being that it could not make calls.

Diversification: diversification is when a totally new product is introduced by a company in a completely new market (Hussain et al., 2013). Here the iPhone is one of the best examples, proving to be a most successful diversification in the market; at launch the company targeted a very large customer group, very different from its traditional small-market following (Hussain et al., 2013). In fact, the company's CEO, Steve Jobs, through hard work and dedication, made sure to create contracts with many music labels and various artists (Hussain et al., 2013).

Generic strategies of Porter

According to Porter, a complete strategy framework has two important components: internal and external analysis (Porter, 2012). External analysis builds on an economic perspective of the industry structure, while internal analysis considers how a firm makes the most of its competencies. Porter warns that an organisation should not get "stuck in the middle" when it comes to strategy. By this statement he means that a company should choose how it will compete, and the best way to compete is to set a strategy and stick with it; the chosen strategy should be simple and not overly complex (Porter, 2012). It mainly focuses on how an organisation should compete. Porter's focus on the whole structure of the industry is a powerful method which covers analysing competitive advantage and building the competencies that help the organisation take effective decisions, and the power of effective decision making only increases in this situation. Though many companies emphasise just one type of analysis, which is essential, it is not sufficient to set the company on the right path (Porter, 2012). A company's actual strategy has to address its most challenging factors and the issues it faces in various aspects of the organisation. Some of these strategic decisions are based on events happening in the present, while others are the subject of periodic strategic reviews (Porter, 2012).

As far as the generic strategies are concerned, there are three approaches; they can be applied to products or services, and they are available to companies of all sizes (Porter, 2012). Porter named these strategies cost leadership (also known as the "no frills" strategy), differentiation (where the company has to create unique or desirable products and services), and focus (where the company offers a specialised service in a specific niche market) (Porter, 2012). The focus strategy can be subdivided further into cost focus and differentiation focus.

The main motive of Porter's generic strategies is to gain competitive advantage, which means the company focuses on developing an edge that helps it maximise the sales of its product or service and take them away from competitors. This can be done through two strategies (Porter & Lee, 2013). The first is increasing profits by reducing costs while charging prices around the industry average. The second is increasing market share by charging lower prices and increasing sales.

Another strategy defined by Porter is the differentiation strategy, which means the company makes its products or services different from the competitors' and more attractive to customers (Porter & Lee, 2013). How a company does this depends entirely on the nature of the market in which it operates, and it will involve features, functionality, durability, and support; beyond this, the company can also rely on the brand image that customers value (Porter & Lee, 2013). The organisation also needs strong research, development, and innovation, the capacity to deliver high-quality products or services, and effective sales and marketing, so that the industry understands the advantages offered by the differentiated company (Porter & Lee, 2013).

Then there is the focus strategy. Companies that use this strategy concentrate mainly on a specific niche market and try to understand the dynamics of that market and the specific needs of the customers within it (Porter & Lee, 2013). Such a company also focuses on developing uniquely low-cost or well-specified products or services. The main motive of these companies is to build strong brand loyalty among customers, and they are able to do so because they serve customers in a unique manner; this is why the specific market segment is less attractive to competitors (Rangan et al., 2012). As with the broad market strategies, it is very important to decide beforehand whether the company will pursue cost leadership or differentiation, so within the focus strategy the company will rely on either a cost-focus or a differentiation-focus approach (Rangan et al., 2012). On a broader basis, the key is to ensure that the company is adding something extra that serves only that specific market niche. Given that this something extra can be achieved in a number of ways, such as reducing costs or increasing differentiation, it is important to focus on the kind of customers the company is serving and their expectations (Rangan et al., 2012).

Porter always maintained in his work on generic strategy that companies must not get stuck in the middle when it comes to strategy, which means it is very important to choose the right generic strategy, since this decision underpins every other strategic decision for the company and makes it more worthwhile to spend the right time on the right things (Rangan et al., 2012). This is a crucial decision for the company; it needs to choose an appropriate strategy and avoid a hit-and-trial method, which may result in losing a lot of money and time. Since the generic strategies cover different types of customers and different market areas, it is not advisable to pursue all three at the same time (Rangan et al., 2012). Therefore, when a company is choosing among the three generic strategies, it is important to take the competencies and strengths of the organisation into account (Rangan et al., 2012).

There are a number of steps which can be followed here. The first is that for each generic strategy the company should carry out a SWOT analysis, through which it can understand the strengths, weaknesses, opportunities, and threats it would face if it adopted that strategy (Rangan et al., 2012). This will give the company a clear picture so it can make the right decision. The second step is to conduct a five forces analysis in order to understand the nature and pattern of the industry in which the company is working (Porter & Kramer, 2011). The next step is to compare the SWOT analyses of the viable strategic options with the results of the five forces analysis, and for each option the company can ask itself how the strategy could be used (Porter & Kramer, 2011): to reduce or manage the power of suppliers, to reduce or manage the power of customers or buyers, to come out on top of the rivalry among competitors, to reduce or eliminate the threat of substitution, or to reduce or eliminate the threat of new entry (Porter & Kramer, 2011). The company can then select the generic strategy that gives it the strongest way to capture the market (Porter & Kramer, 2011).

Conclusion

The report has discussed the Ansoff Matrix and Porter's generic strategies in detail and highlighted the advantages of the points covered in both concepts. It has tried to show the basic differences between companies and how different companies have to adopt different strategies in order to survive in the market for the longest period of time.

Reference

Jarratt, D., & Stiles, D. 2010. How are methodologies and tools framing managers' strategizing practice in competitive strategy development? British Journal of Management, 21(1), 28-43.

Hussain, S., Khattak, J., Rizwan, A., & Latif, M. A. 2013. ANSOFF Matrix, Environment, and Growth: An Interactive Triangle. Management and Administrative Sciences Review, 2(2), 196-206.

Porter, D. R. 2012. Managing Growth in America's Communities. Island Press.

Porter, M. E., & Lee, T. H. 2013. The Strategy That Will Fix Health Care. Harvard Business Review, 91(10), 50+.

Rangan, S., & Adner, R. 2012. Profits and the Internet: Seven Misconceptions. MIT Sloan Management Review, 42(4), 44.

Porter, M. E., & Kramer, M. R. 2011. Creating Shared Value. Harvard Business Review, 89(1/2), 62-77.

Tuesday, December 3, 2019

Tan And Wang Essay, Research Paper - Free Essay Sample

The Joy Luck Club

The movie and book The Joy Luck Club, directed by Wayne Wang and written by Amy Tan, respectively, although still depressing at times, was a nice departure from the stark death and destruction featured in the works we discussed in the first half of the semester. The stories of the eight women hit very close to home for me because I also have a love-and-hate relationship with my mother, who lived a much different life growing up than I did. The movie and book are a beautiful celebration of mother-daughter relationships, of Chinese culture, and of the clash between old traditions and generations and new ones.

Many may disagree, but I think that the movie did a much better job of developing and presenting the characters and sharing their thoughts than the book did. I have read the book and watched the film several times, and the movie gets better each time while the book becomes more boring and harder to read all the way through. It is second nature for me to draw a picture in my head of the world presented in books, but it was a very difficult task to perform while reading The Joy Luck Club. I attribute this deficit to my lack of knowledge about China and its people's culture. The movie showed me the people and the land, and it was a lot easier to interpret the events and their meanings in the film. The first time I read the book it was extremely difficult to keep up with what was going on and who was telling the story. It constantly jumped back and forth between the past and the present and between the mothers and the daughters. It was very frustrating to try to keep up, and it did not become any easier to understand with repeated readings. By putting distinct faces on the characters, the movie made tracking the timeline simple. I was able to spend less time trying to get the facts straight and more time listening to the important lessons being told and admiring the beauty of the cinematography itself.

I also appreciate the changes that were made in the adaptation of The Joy Luck Club from the page to the screen. The nicest change was Rose getting back together with her husband, Ted, at the end of the film. Their story in the book was left up in the air: there was no confirmation that they got divorced or that they reconciled their differences and stayed in the marriage. Changing this outcome was a good move because it tied the ending into a neat little package. Hollywood is very good at ending films on a happy note, and that is what many people are used to seeing. I think the film would feel inconclusive if the director had left this story hanging, because the endings of the other daughters' stories did not have a very distinct and satisfying conclusion in either the film or the book. It was kind of Wang to give the audience a little sense of closure.

In the case of Ying Ying St. Clair, I also think it was wise for the writers to have the character drown her baby instead of abort it. It developed into a much more dramatic effect that more people could relate to. I do not believe that Ying Ying would have been a very sympathetic character in the film if she had had an abortion, because so many people have very strong opinions about that controversial subject. I am not saying that murdering children is all right; rather, the way she took her child's life in the film was believable and understandable given the circumstances. It may even have been accidental, and mistakes are easily forgivable. By drowning the baby that she so desperately loved and doted on, the director effectively demonstrated the sheer pain that Ying Ying was in from being married to an abusive, cheating husband.

Another theme that set The Joy Luck Club section apart from the other works discussed in this series is the role of men. I like the way one student in a class discussion described their role as catalytic. The stories were entirely about the women, and the events in their lives are what moved the plot. Men did have a lot to do with the directions the women took, but their emotions and their points of view about the women's situations were not relevant. For example, Ying Ying St. Clair drowned her son because her abusive husband drove her to want revenge on him; Lena St. Clair had an unloving, stubborn husband who could not see her; Lindo Jong hated the husband from her arranged marriage because he essentially took her away from her family; and Rose Hsu lost touch with her soul because she spent so much time trying to please her husband and keep him happy.

I prefer the movie to the book, but I found the entirety of the stories in general wonderful. I have read very few works written by women, for women, and about women. It was very exciting to read a novel and watch a movie that showed such great insight into a woman's heart and mind, and that proved that women can be beautiful, intelligent, independent, and strong in such a male-dominated world.

Wednesday, November 27, 2019

Psych Paper Essays - Carson Hill, Term Papers

My Mom and Dad were divorced when I was one. Dad actually managed to sexually abuse me before the divorce. Karen and Janet, my two older sisters, and I went to Dad's on Sundays, where we had breakfast. We listened only to classical music, which we hated, probably because it was Dad's. We did not like him too much; he was different. I had no idea until after he was murdered that he was gay. Well, looking back, he was flamboyant, wearing scarves and brooches. He was a gourmet cook and prided himself on the feasts he made for us. My favorite was the crepes drenched in butter and cinnamon sugar. He kept house meticulously, which mirrored his career as a famous art restorer. I never told him I loved him. We had an emotionally distant co-existence. One thing I have held dear, like the person in Living Through Personal Crisis by Dr. Ann Kaiser Stearns who saved all the clothes of their loved one, is a small crystal Easter egg that he gave to me one Easter. It is a symbol of his love, and of my valuing it. Mostly, he showed his love through things and through outings to plays and musical recitals. Those times were sometimes fun, sometimes tedious. But today, I have come to enjoy these types of cultural events. They have helped to shape who I am today.

How do you grieve someone you hardly knew, but who is supposed to mean so much? I have postponed the grief somewhat through alcohol and drug use and avoidance. He did mean something to me, because when we came home from school that day in January, in seventh grade, I was shocked when Mom declared, "Your father is dead." What do you mean?! What happened?! What do you mean he's dead?! Then the tears started to come and the "oh my God"s, the utter shock. They told me it was a burglary, but that is not what happened. The truth was withheld from me. He was actually taking advantage of two young male prostitutes. Risky behavior, that's for sure. What do you mean, male?! What do you mean, prostitutes?! I was humiliated! It was years later that I got this news. The whole scene was embarrassing. I thought everyone knew from the newspaper, but the whole story was not in the newspaper due to plea bargaining.

Back in the seventh grade, when this occurred, I was supposed to give a speech dressed as Pocahontas in social studies. Needless to say, I missed that one and subsequently almost failed out of McDonogh that semester. People really don't give enough time for grieving in this society. I needed more time. You would not believe how many times I heard "I'm sorry" from acquaintances at school. It was too much. It did not help me at all to feel better. No one knew how to listen or even wanted to listen. One girl did ask me how many times he was stabbed. That was really ignorant. I would not have known what to say if someone had listened. But "I'm sorry" is really useless in helping a person in mourning. Not having any close friends during this time caused me to push my anger down. This began years of depression and suicidal thoughts. An awful lot can happen when one does not deal with pain and loss. My best friend Ramsey and I did not even talk about the loss of my Dad. However, I did find one coping mechanism that further lengthened my grief: alcohol. My first drink was with Ramsey at her grandmother's house. It was sweet white wine from my Dad's wine cellar. I had no empathic friends at this time in my life to root for me and help me to talk about my feelings. In middle school, who really has that anyway? It seems that no one I knew talked about problems or supported each other, except the cheerleaders! The importance of empathic friends in my life today is priceless. I would not do without the recognition of growth, the warmth and affection, the reminders of strengths, and the respect of my courage and sense of determination, along with all the…

Saturday, November 23, 2019

The Significance of Physical Therapy - Professor Ramos Blog

What pops into one's head when thinking of a doctor? Most people say a doctor is the person one goes to visit when they are sick, who hands them medicine in order to feel better. What most people may not know is that a physical therapist is now required to complete a doctorate degree in order to officially become a Doctor of Physical Therapy. Physical therapists heal individuals who have broken or fractured bones and even help those with lifelong diseases. Perhaps one of the most important aspects of this career is the patient: the outcome of whether or not therapy works fluently relies almost entirely on patient participation. Not only are physical therapists well compensated for their work, but the patient's outcome of regaining the strength they once had is perhaps the greatest reward.

Six to eight years of schooling is typically what this career entails, completing such degrees as a Bachelor's and a Master's as well as a Doctorate. After completing the doctorate degree, one now has the honor of being a Doctor of Physical Therapy (DPT). Along with the education, one must go through a series of state and federal certifications as well as take a state exam in order to get a state license. In addition to the certifications and state license, a national exam is required in order to be a recognized PT; the national exam is administered by the Federation of State Boards of Physical Therapy ("Physical Therapist…"). After completion of the required education, a new doctor is born. After years of experience, some physical therapists choose to become board-certified specialists through the American Board of Physical Therapy Specialties. A board-certified specialist can specialize in one of nine different specialties, which include sports, orthopedics, and geriatrics. The compensation PTs receive is quite large considering they are a type of doctor: physical therapists earn a median of $91,541 a year, and pay runs as high as $104,437 in the Inglewood, California area ("Physical Therapist in…"). Higher compensation is determined by the wealth of the area one is working in.

Often, therapy is thought to treat only the injured or hurt, but physical therapists treat a lot more people than one may think. No one goes without equal attention from a PT: the elderly, the hurt, the medically disabled, even individuals with diseases that affect the body. The elderly seem to need the most aid from a PT due to their rapid loss of strength and ability; regaining strength and muscle is a major part of the rehabilitation process. A patient going through therapy often needs more help and support regaining strength than anything else. Unlike an injury or fracture, a stroke is a serious and harmful event that can cause lifelong defects or, in the worst cases, death. Individuals recovering from strokes and minor heart attacks visit a DPT's office daily. Loss of strength, movement, and coordination often comes with a stroke. Strokes are the leading cause of disability: 75% of the 550,000 individuals who survive a stroke go on to live with varying degrees of impairment or disability ("Analysis of the Relationship…"). Perhaps what most of the career consists of is patient participation.

"The goal of a physical therapist is to promote the patient's ability to move, reduce pain, restore function, and prevent disability" (Ross). However, this cannot happen if the patient does not go through with his part of the deal. The patient is not only the person who gives the PT work but also the most important factor in determining the success of the treatment. The experience one has at a PT office does not depend so much on the DPT as it does on the patient. The patient's participation very often determines the length of the stay, the effectiveness of the stay, and the experience of the stay ("Significance of…"). Whether it is a good or a bad experience, the therapist cannot do much for individuals if they do not participate.

According to recent studies by the US Bureau of Labor Statistics, physical therapy is in good hands in terms of future jobs. Between 2014 and 2024, physical therapist jobs will grow by 34%. Approximately 210,900 licensed PTs are currently employed; that number will increase to an astonishing 282,700 by the year 2024 (Ross). Physical therapy is not only well recognized for its work in the field of medicine but has also been recognized nationally by mainstream media. Big names such as Forbes and CNN took some time to polish up the career of physical therapy in the media: "Forbes ranked physical therapists as having 1 of The Ten Happiest Jobs, according to articles published in 2013 and 2011. CNNMoney.com gave physical therapists a grade of 'A' in Personal Satisfaction in 2012, as well as in its 'Benefit to Society' categories." As if the media polish were not enough, more than three quarters of physical therapists polled reported being "very satisfied" with their occupation (Ross).

However, DPT Peter Christakos expresses concern about the rapidly growing profession. He describes the enlarging of PT class sizes in order to accommodate the fast-growing student clusters, and he goes on to compare the profession of physical therapy to a bubble. The significance of larger class sizes amid this ongoing growth goes without saying, but Christakos makes a valid point when he says that physical therapists hold the future of the profession in their hands: the supply and demand curve of future jobs in the field is meant to be left untouched by PTs (Christakos). By increasing class volumes, supply would shoot up, leaving demand to catch up by itself. Christakos sketches the bubble of the profession and asks, "Will we [PTs] let it burst?" (Christakos)

Having the opportunity to change one's life goes many ways. Physical therapists aid those in need and positively impact their lives. The hefty compensation goes without saying when speaking in terms of the patient's progress and accomplishments during the rehabilitation process. Patient participation does in fact affect the outcome of the treatment: unlike in other occupations, PTs cannot do much for individuals if the patients do not cooperate. A Doctor of Physical Therapy plays a major part in the world of health care; the regaining of strength and ability of an individual who was once as strong as an ox could not be done without a DPT.

Christakos, Peter. "When Will the Bubble Burst?" PT in Motion, http://web.a.ebscohost.com/ehost/pdfviewer/pdfviewer?vid=5sid=1241b402-b590-43cd-ade4-b3de755e27db%40sdc-v-sessmgr03. Accessed 23 July 2019.

K, Janet. "Analysis of the Relationship Between the Utilization of Physical Therapy Services and Outcomes for Patients With Acute Stroke." OUP Academic, Oxford University Press, 1 Oct. 1999, www.academic.oup.com/ptj/article/79/10/906/2842426. Accessed 23 July 2019.

"Physical Therapist Salary in Inglewood, CA." Salary.com, www.salary.com/research/salary/benchmark/physical-therapist-salary/inglewood-ca?personalized. Accessed 23 July 2019.

"Physical Therapists: Occupational Outlook Handbook." U.S. Bureau of Labor Statistics, www.bls.gov/ooh/healthcare/physical-therapists.htm. Accessed 23 July 2019.

Ross, Libby. "Benefits of a Physical Therapist Career." APTA, www.apta.org/PTCareers/Benefits/. Accessed 23 July 2019.

"Significance of Poor Patient Participation in Physical and Occupational Therapy for Functional Outcome and Length of Stay." Archives of Physical Medicine and Rehabilitation, W.B., www.sciencedirect.com/science/article/abs/pii/S0003999304004307. Accessed 23 July 2019.

Thursday, November 21, 2019

Did Moses Write the Pentateuch or the Book of Moses in the Bible Research Paper

The Pentateuch contains the laws and instructions of God given to the people of Israel through Moses, hence the Pentateuch's other name, the "Book of Moses." In the Pentateuch, the Israelites were appointed as the chosen people of God and the beneficiaries of the Ark of the Covenant, and the foundation was laid for the coming of the Messiah in the person of Jesus Christ.

II. Passages in the Bible that suggest Moses' authorship of the Pentateuch

There are several passages in the Pentateuch and the Bible that led to the initial conclusion that Moses indeed wrote the entire body of the Pentateuch. …

Matthew 22:24 "Moses said, 'If a man dies without children...'"
Mark 7:10 "For instance, Moses gave you this law from God..."
Mark 12:24 "...haven't you ever read about this in the writings of Moses, in the story of the burning bush..."
Luke 24:44 "...I told you that everything written about me by Moses and the prophets and in the Psalms must all come true."
John 1:17 "For the law was given through Moses..."
John 5:46 "But if you had believed Moses, you would have believed me because he wrote about me. And since you don't believe what he wrote, how will you believe what I say?"
John 7:23 "...do it, so as not to break the law of Moses..."
Acts 26:22 "...I teach nothing except what the prophets and Moses said would happen..."
Romans 10:5 "For Moses wrote..."

III. Was the Pentateuch the work of a single author (Moses) or an anthology of diverse material?

It is easy to conclude that the first five books of the Bible were written by Moses, given the above Biblical passages' suggestion that Moses wrote the entire Pentateuch. The books are also attributed to him, not to mention that he is a central figure in them. A close examination of the Pentateuch by scholars beginning in the eighteenth century, however, led them to conclude that the Pentateuch was not written by a single author, or by Moses alone as the traditional view suggests, but is rather an anthology of diverse materials.

Evidence that the Pentateuch was not written by a single author

When critical literary analysis was applied to the Pentateuch, it was found that the five books contained numerous duplications, a broad diversity of writing styles, and even contrasting viewpoints. The discovery of the duplication of texts in the body of the Pentateuch led scholars to study that the first five…

Wednesday, November 20, 2019

Qualitative Research in Management Essay Example - 2500 Words

This paper will begin with an overview of qualitative research. There are generally two types of research, i.e., quantitative and qualitative. Quantitative research uses structured methods aimed at quantifying the data using statistical methods; it is designed to prove reliability, generalizability, and objectivity. Qualitative research, on the other hand, uses unstructured methods seeking to give insights into, and an understanding of, problems. These two types of research are based on different concepts: qualitative research is based on the social sciences, trying to understand and explain behaviors in particular situations, while quantitative research evolved in the natural sciences, seeking to find common laws which show the relationship of cause and effect. Qualitative research is a method of social study that focuses on how people think, live, and behave. It is used in different academic disciplines as well as in social science. In addition, it is also used to gain an in-depth understanding of people's attitudes, culture, feelings, values, and interests, and of their social reality as individuals or groups. Marshall and Rossman (1998) define qualitative research as "a form of social inquiry that focuses on the way people interpret and make sense of their experiences and the world in which they live." The decision to use qualitative or quantitative research depends on the nature of the issue under investigation. For example, if research aims to investigate the effect of credit supply shocks on firms' financial and investment decisions, then quantitative research would be more appropriate; however, if the objective were to explore how people respond to a government announcement of job cuts, then qualitative research would be the best choice. Therefore, the question of which approach is good for a study depends on the nature of the subject. Although both qualitative and quantitative research have advantages and disadvantages, qualitative research is believed to provide very rich data for analysis. The study by Punch (2005) highlights that qualitative research has the advantage of being explorative in nature, because it allows researchers to explore new ideas and concepts and to get new insights. There is also consensus among researchers that it helps in gathering data in a natural and reliable setting, which is not possible in quantitative research. In addition, as qualitative research focuses on individuals, groups, etc., it helps to gain detailed and complex information about the phenomena under study. It may be because of these advantages that researchers pursue qualitative research, especially in social science or when the subject of study is human beings (Mack et al., 2005). As mentioned earlier, qualitative research…

Sunday, November 17, 2019

The buying back of shares is a dangerous financial strategy as it increases the company's capital gearing. Evaluate this - Essay

There are different motives that attract companies to buy back their shares, and there are different techniques that can be used to go through the process of a stock repurchase. The techniques used by companies for a stock buy-back are as follows. A company may offer to purchase the shares from its shareholders at a premium price; this gives them value and an extra return over the price they actually paid when the shares were bought. A company may also buy back its shares on the open market, like an ordinary investor purchasing shares and making an investment. It is often seen that the market and shareholders perceive the decision of a company to buy back its shares as a positive move, and shareholders expecting higher returns stimulate the stock price of the company (Larry, 1981).

Motives for a stock buyback

Different circumstances and requirements of business conditions can influence management's decision to repurchase shares. Such motivating factors, along with their reasons, are discussed below.

Market perception

It is the perception of shareholders and potential investors that exists in the market that matters for the future of a company. A company is believed to use the capital or extra finance available to it to buy back its shares, thus giving the perception in the market that its shareholders are valued, as they are provided the opportunity to trade their shares at a premium price (Udo & Richard, 2003). This removes any negative market perception that the stock price of the company has fallen and that expectations for its future are low, which affects the dealing of its shares in the market. Such perceptions often arise from low earnings reported by the company in some past period, or from operations affected by a scandal or lawsuit, so a share buyback is used as an option to remove any negative perceptions prevailing about the company in the market (David et al., 1995). It becomes necessary for the company to make the share buyback because the market, due to such instances and incidents, might value the shares well below the company's expectations; the buyback keeps a standard for its shares in the market and keeps value for its shareholders alive. However, it is believed that a hike in the share price achieved through this approach lasts only a nominal period (Mansor et al., 2011).

Financial ratios

It is a usual practice in the market that investors, before making any investment, make decisions on the basis of research and evaluation of the companies that seem to have potential for investment. A company's financial ratios are the most basic and foremost results used for such an evaluation. This is part of the rational decision making of investors, as they evaluate their choice of investment before making the final decision (Amy, 2000). Thus a share buyback can be part of an accounting technique to get the desired results for the company. However, it is the company's own finance that it utilizes to buy back the shares, so it is the confidence that companies have in their own abilities that makes them repurchase the outstanding shares, which are either absorbed or turned into treasury stock. The purchase reduces the assets of the company, as it is cash that is paid for the purchase of the shares, and therefore one of the most important…
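To make the gearing claim in the essay title concrete, the following is a minimal numerical sketch in Python; all figures are hypothetical and purely illustrative. It shows why financing a buyback out of the company's own cash reduces equity and therefore raises capital gearing:

    # Hypothetical balance-sheet figures, in millions.
    debt = 400.0      # long-term debt
    equity = 600.0    # shareholders' equity
    buyback = 150.0   # cash spent repurchasing shares

    def gearing(debt, equity):
        # Capital gearing measured as debt / (debt + equity).
        return debt / (debt + equity)

    print(f"Gearing before buyback: {gearing(debt, equity):.1%}")            # 40.0%
    # The repurchased shares are cancelled or held as treasury stock,
    # so equity falls by the amount of the buyback.
    print(f"Gearing after buyback:  {gearing(debt, equity - buyback):.1%}")  # 47.1%

Because the debt is unchanged while the equity base shrinks, the ratio rises even though no new borrowing has taken place, which is the sense in which a buyback increases gearing.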

Friday, November 15, 2019

VaR Models in Predicting Equity Market Risk

Chapter 3 Research Design

This chapter presents how to apply the proposed VaR models in predicting equity market risk. Basically, the thesis first outlines the collected empirical data. We next focus on verifying the assumptions usually engaged in the VaR models and then identifying whether the data characteristics are in line with these assumptions through examining the observed data. Various VaR models are subsequently discussed, beginning with the non-parametric approach (the historical simulation model) and followed by the parametric approaches under different distributional assumptions of returns and, intentionally, in combination with the Cornish-Fisher expansion technique. Finally, backtesting techniques are employed to evaluate the performance of the suggested VaR models.

3.1. Data

The data used in the study are financial time series that reflect the daily historical price changes for two single equity index assets: the FTSE 100 index of the UK market and the S&P 500 of the US market. Mathematically, instead of using the arithmetic return, the paper employs the daily log-returns. The full period, on which the calculations are based, stretches from 05/06/2002 to 22/06/2009 for each single index. More precisely, to implement the empirical test, the period is divided into two sub-periods: the first series of empirical data, which is used for the parameter estimation, spans from 05/06/2002 to 31/07/2007; the rest of the data, between 01/08/2007 and 22/06/2009, is used for predicting VaR figures and backtesting. Note that the latter stage is exactly the recent global financial crisis period, which began in August 2007, dramatically peaked in the closing months of 2008, and subsided significantly in the middle of 2009. Consequently, the study purposely examines the accuracy of the VaR models within this volatile time.

3.1.1. FTSE 100 index

The FTSE 100 Index is a share index of the 100 most highly capitalised UK companies listed on the London Stock Exchange; it began on 3 January 1984. FTSE 100 companies represent about 81% of the market capitalisation of the whole London Stock Exchange, and the index has become the most widely used UK stock market indicator. In the dissertation, the full data used for the empirical analysis consists of 1782 observations (1782 working days) of the UK FTSE 100 index covering the period from 05/06/2002 to 22/06/2009.

3.1.2. S&P 500 index

The S&P 500 is a value-weighted index, published since 1957, of the prices of 500 large-cap common stocks actively traded in the United States. The stocks listed on the S&P 500 are those of large publicly held companies that trade on either of the two largest American stock exchanges, NYSE Euronext and NASDAQ OMX. After the Dow Jones Industrial Average, the S&P 500 is the most widely followed index of large-cap American stocks. The S&P 500 refers not only to the index but also to the 500 companies whose common stock is included in the index, and it is consequently considered a bellwether for the US economy. Similar to the FTSE 100, the data for the S&P 500 are observed during the same period, with 1775 observations (1775 working days).
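To make the data-preparation step concrete, here is a minimal Python sketch (not part of the original thesis) of how the daily log-returns and the two sub-periods could be constructed with pandas. The file name ftse100.csv and the column names Date and Close are assumptions for illustration only; the same steps would apply to the S&P 500 series.

    import numpy as np
    import pandas as pd

    # Load daily closing prices (hypothetical file and column names).
    prices = pd.read_csv("ftse100.csv", index_col="Date", parse_dates=True)["Close"]

    # Daily log-returns: R_t = ln(P_t / P_{t-1}).
    returns = np.log(prices / prices.shift(1)).dropna()

    # Estimation window (parameter fitting) and backtesting window (crisis period).
    estimation = returns.loc["2002-06-05":"2007-07-31"]
    backtest = returns.loc["2007-08-01":"2009-06-22"]

    print(len(estimation), "estimation observations")
    print(len(backtest), "backtesting observations")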
3.2. Data Analysis

For VaR models, one of the most important aspects is the set of assumptions underlying the measurement of VaR. This section first discusses several VaR assumptions and then examines the characteristics of the collected empirical data.

3.2.1. Assumptions

3.2.1.1. Normality assumption

Normal distribution. As mentioned in chapter 2, most VaR models assume that the return distribution is normal with a mean of 0 and a standard deviation of 1 (see Figure 3.1). Nonetheless, chapter 2 also shows that actual returns in most previous empirical investigations do not completely follow the standard normal distribution.

Figure 3.1: Standard Normal Distribution

Skewness. Skewness measures the asymmetry of the distribution of the financial time series around its mean. Data are normally assumed to be symmetrically distributed, with a skewness of 0. A dataset with either a positive or a negative skew deviates from the normal distribution assumption (see Figure 3.2). This can cause parametric approaches, such as RiskMetrics and the symmetric normal-GARCH(1,1) model under the assumption of normally distributed returns, to be less effective if asset returns are heavily skewed. The result can be an overestimation or underestimation of the VaR value, depending on the skew of the underlying asset returns.

Figure 3.2: Plot of a positive or negative skew

Kurtosis. Kurtosis measures the peakedness or flatness of the distribution of a data sample and describes how concentrated the returns are around their mean. A high value of kurtosis means that more of the data's variance comes from extreme deviations; in other words, the asset returns contain more extreme values than the normal distribution would model. Positive excess kurtosis is, according to Lee and Lee (2000), called leptokurtic, and negative excess kurtosis is called platykurtic. Normally distributed data have a kurtosis of 3.

Figure 3.3: General forms of kurtosis

Jarque-Bera statistic. In statistics, the Jarque-Bera (JB) statistic tests whether a series is normally distributed. In other words, the Jarque-Bera test is a goodness-of-fit measure of departure from normality, based on the sample skewness and kurtosis. The test statistic is defined as

JB = (n/6) [S^2 + (K - 3)^2 / 4],

where n is the number of observations, S is the sample skewness and K is the sample kurtosis. For large sample sizes, the test statistic follows a Chi-square distribution with two degrees of freedom.
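As an illustration (not part of the original thesis, which relies on STATA and Excel), the JB statistic can be computed from the sample moments along the following lines; `returns` is assumed to hold the daily log-returns:

    import numpy as np
    from scipy import stats

    def jarque_bera(returns):
        """Jarque-Bera statistic JB = (n/6) * (S^2 + (K - 3)^2 / 4)."""
        n = len(returns)
        s = stats.skew(returns)                    # sample skewness S
        k = stats.kurtosis(returns, fisher=False)  # sample kurtosis K (normal: K = 3)
        jb = n / 6.0 * (s ** 2 + (k - 3.0) ** 2 / 4.0)
        p_value = 1.0 - stats.chi2.cdf(jb, df=2)   # asymptotically chi-square with 2 df
        return jb, p_value

The same test is also available directly as scipy.stats.jarque_bera.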
Augmented Dickey-Fuller statistic. The Augmented Dickey-Fuller (ADF) test is a test for a unit root in a time series sample. It is an augmented version of the Dickey-Fuller test for a larger and more complicated set of time series models. The ADF statistic used in the test is a negative number; the more negative it is, the stronger the rejection of the hypothesis that there is a unit root at some level of confidence. ADF critical values: -3.4334 (1%), -2.8627 (5%), -2.5674 (10%).

3.2.1.2. Homoscedasticity assumption

Homoscedasticity refers to the assumption that the dependent variable exhibits similar amounts of variance across the range of values of an independent variable.

Figure 3.4: Plot of homoscedasticity

Unfortunately, chapter 2, drawing on previous empirical studies, confirmed that financial markets usually experience unexpected events and uncertainties in prices (and returns) and exhibit non-constant variance (heteroskedasticity). Indeed, the volatility of financial asset returns changes over time, with periods of exceptionally high volatility interspersed with periods of unusually low volatility, a phenomenon known as volatility clustering. It is one of the widely recognised stylised facts (stylised statistical properties of asset returns) common to a broad set of financial assets: high-volatility events tend to cluster in time.

3.2.1.3. Stationarity assumption

According to Cont (2001), the most essential prerequisite of any statistical analysis of market data is the existence of some statistical properties of the data under study that remain constant over time; otherwise it is meaningless to try to identify them. One of the hypotheses relating to the invariance over time of the statistical properties of the return process is stationarity. This hypothesis assumes that for any set of time instants t_1, ..., t_k and any time interval tau, the joint distribution of the returns (r_{t_1}, ..., r_{t_k}) is the same as the joint distribution of the returns (r_{t_1 + tau}, ..., r_{t_k + tau}). The Augmented Dickey-Fuller test, in turn, will be used to examine whether the statistical properties of the return series are stationary.

3.2.1.4. Serial independence assumption

There are a large number of tests of the randomness of sample data. Autocorrelation plots are one common method of testing for randomness. Autocorrelation is the correlation between returns at different points in time. It is the same as calculating the correlation between two different time series, except that the same time series is used twice: once in its original form and once lagged by one or more time periods. The result can range from +1 to -1. An autocorrelation of +1 represents perfect positive correlation (an increase in one time series leads to a proportionate increase in the other), while a value of -1 represents perfect negative correlation (an increase in one time series results in a proportionate decrease in the other). In econometric terms, the autocorrelation plot is examined with the Ljung-Box Q statistic; instead of testing randomness at each distinct lag, it tests the overall randomness based on a number of lags. The Ljung-Box statistic is defined as

Q = n(n + 2) Σ_{j=1}^{h} ρ_j^2 / (n - j),

where n is the sample size, ρ_j is the sample autocorrelation at lag j, and h is the number of lags being tested. The hypothesis of randomness is rejected if Q exceeds the (1 - α) quantile (percent point function) of the Chi-square distribution with h degrees of freedom, where α is the significance level.

3.2.2. Data Characteristics

Table 3.1 gives the descriptive statistics for the FTSE 100 and the S&P 500 daily stock market prices and returns. Daily returns are computed as logarithmic price relatives: Rt = ln(Pt/Pt-1), where Pt is the closing price at time t. Figures 3.5a, 3.5b, 3.6a and 3.6b present the plots of returns and price indexes over time. Figures 3.7a, 3.7b, 3.8a and 3.8b illustrate the frequency distribution of the FTSE 100 and S&P 500 daily return data with a normal distribution curve imposed, spanning from 05/06/2002 through 22/06/2009.

Table 3.1: Descriptive statistics of the returns of the FTSE 100 index and the S&P 500 index between 05/06/2002 and 22/06/2009.

DIAGNOSTICS                        S&P 500                     FTSE 100
Number of observations             1774                        1781
Largest return                     10.96%                      9.38%
Smallest return                    -9.47%                      -9.26%
Mean return                        -0.0001                     -0.0001
Variance                           0.0002                      0.0002
Standard deviation                 0.0144                      0.0141
Skewness                           -0.1267                     -0.0978
Excess kurtosis                    9.2431                      7.0322
Jarque-Bera                        694.485***                  2298.153***
Augmented Dickey-Fuller (ADF) (2)  -37.6418                    -45.5849
Q(12)                              20.0983* (autocorr. 0.04)   93.3161*** (autocorr. 0.03)
Q^2(12)                            1348.2*** (autocorr. 0.28)  1536.6*** (autocorr. 0.25)
Ratio of SD to mean                144                         141

Notes: 1. *, **, and *** denote significance at the 10%, 5%, and 1% levels, respectively. 2. The 95% critical value for the Augmented Dickey-Fuller statistic is -3.4158.
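The Q(12) and Q^2(12) rows of Table 3.1 can be reproduced in outline with statsmodels; a sketch, assuming `returns` is a pandas Series of daily log-returns:

    from statsmodels.stats.diagnostic import acorr_ljungbox

    # Ljung-Box test at h = 12 lags for the returns (linear dependence) ...
    q_stat = acorr_ljungbox(returns, lags=[12])
    # ... and for the squared returns (dependence in the variance).
    q2_stat = acorr_ljungbox(returns ** 2, lags=[12])

    # In recent statsmodels versions the result is a DataFrame with
    # columns lb_stat and lb_pvalue.
    print(q_stat)
    print(q2_stat)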
Figure 3.5a: The FTSE 100 daily returns from 05/06/2002 to 22/06/2009
Figure 3.5b: The S&P 500 daily returns from 05/06/2002 to 22/06/2009
Figure 3.6a: The FTSE 100 daily closing prices from 05/06/2002 to 22/06/2009
Figure 3.6b: The S&P 500 daily closing prices from 05/06/2002 to 22/06/2009
Figure 3.7a: Histogram of the FTSE 100 daily returns with a normal distribution curve, spanning from 05/06/2002 through 22/06/2009
Figure 3.7b: Histogram of the S&P 500 daily returns with a normal distribution curve, spanning from 05/06/2002 through 22/06/2009
Figure 3.8a: The FTSE 100's frequency distribution with a normal distribution curve, spanning from 05/06/2002 through 22/06/2009
Figure 3.8b: The S&P 500's frequency distribution with a normal distribution curve, spanning from 05/06/2002 through 22/06/2009

Table 3.1 shows that the FTSE 100 and S&P 500 average daily returns are approximately 0 percent, or at least very small compared with the sample standard deviation (the standard deviation is 141 and 144 times the size of the average return for the FTSE 100 and the S&P 500, respectively). This is why the mean is often set to zero when modelling daily portfolio returns: doing so reduces the uncertainty and imprecision of the estimates. In addition, the large standard deviation relative to the mean supports the view that daily changes are dominated by randomness, so the small mean can be disregarded in risk measure estimates. The paper also employs five statistics commonly used in analysing such data (skewness, kurtosis, Jarque-Bera, Augmented Dickey-Fuller and Ljung-Box) to examine the full empirical period from 05/06/2002 through 22/06/2009.

Figures 3.7a and 3.7b show the histograms of the FTSE 100 and S&P 500 daily return data with the normal distribution imposed. The distribution of both indexes has longer, fatter tails and higher probabilities for extreme events than the normal distribution, particularly on the negative side (the negative skewness implies a long left tail). Fatter negative tails mean a higher probability of large losses than the normal distribution would suggest. The distributions are also more peaked around their means than the normal distribution: the value of kurtosis is very high (roughly 10 for the FTSE 100 and 12 for the S&P 500, compared with 3 for the normal distribution; see also Figures 3.8a and 3.8b). In other words, the most prominent deviation from the normal distributional assumption is the kurtosis, which can be seen from the middle bars of the histogram rising above the normal curve. Moreover, outliers clearly remain, indicating that excess kurtosis is present. The Jarque-Bera test rejects normality of returns at the 1% level of significance for both indexes. The samples therefore display the familiar financial characteristics of volatility clustering and leptokurtosis.
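A histogram-with-normal-overlay comparison of the kind shown in Figures 3.7a through 3.8b can be sketched as follows; this is illustrative only, with `returns` again holding one index's daily log-returns:

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy import stats

    mu, sigma = returns.mean(), returns.std()
    grid = np.linspace(returns.min(), returns.max(), 400)

    plt.hist(returns, bins=100, density=True, alpha=0.6, label="empirical returns")
    plt.plot(grid, stats.norm.pdf(grid, mu, sigma), label="fitted normal")
    plt.legend()
    plt.title("Daily returns against a fitted normal distribution")
    plt.show()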
Besides, the daily returns for both indexes (presented in Figures 3.5a and 3.5b) reveal that volatility occurs in bursts. The returns were very volatile at the beginning of the examined period, from June 2002 to the middle of June 2003. After remaining stable for about four years, the returns of these two well-known stock indexes became highly volatile from July 2007 (when the credit crunch was about to begin), and volatility peaked dramatically from July 2008 to the end of June 2009. In general, the collected daily data exhibit two recognised characteristics. First, extreme outcomes occur more often, and are larger, than the normal distribution predicts (fat tails). Second, the size of market movements is not constant over time (conditional volatility).

In terms of stationarity, the Augmented Dickey-Fuller test is adopted as the unit root test. The null hypothesis of this test is that there is a unit root (the time series is non-stationary); the alternative hypothesis is that the time series is stationary. If the null hypothesis is rejected, the series is stationary. In this thesis, the paper employs the ADF unit root test including an intercept and a trend term on the returns. The resulting test statistics for the FTSE 100 and the S&P 500 are -45.5849 and -37.6418, respectively. Such values lie far below the 95% critical value of the Augmented Dickey-Fuller statistic (-3.4158). Therefore, we can reject the unit root null hypothesis and conclude that the daily return series are robustly stationary.

Finally, Table 3.1 shows the Ljung-Box test statistics for serial correlation of the return and squared return series for k = 12 lags, denoted by Q(k) and Q^2(k), respectively. The Q(12) statistic is statistically significant, implying the presence of serial correlation in the FTSE 100 and S&P 500 daily return series (first-moment dependencies); in other words, the return series exhibit some linear dependence.

Figure 3.9a: Autocorrelations of the FTSE 100 daily returns for lags 1 through 100, covering 05/06/2002 to 22/06/2009
Figure 3.9b: Autocorrelations of the S&P 500 daily returns for lags 1 through 100, covering 05/06/2002 to 22/06/2009

Nevertheless, Figures 3.9a and 3.9b and the autocorrelation coefficients (presented in Table 3.1) indicate that the FTSE 100 and S&P 500 daily returns do not display any systematic pattern and have very little autocorrelation. Following Christoffersen (2003), in this situation we can write:

Corr(R_{t+1}, R_{t+1-λ}) ≈ 0, for λ = 1, 2, 3, ..., 100.

Therefore, returns are almost impossible to predict from their own past. Note that since the mean of daily returns for both indexes (-0.0001) is not significantly different from zero, the variances of the return series are measured by the squared returns. The Ljung-Box Q^2 test statistic for the squared returns is much higher, indicating the presence of serial correlation in the squared return series. Figures 3.10a and 3.10b and the autocorrelation coefficients (presented in Table 3.1) also confirm the autocorrelation in the squared returns (variances) of the FTSE 100 and S&P 500 data; more importantly, variance displays positive correlation with its own past, especially at short lags:

Corr(R^2_{t+1}, R^2_{t+1-λ}) > 0, for λ = 1, 2, 3, ..., 100.

Figure 3.10a: Autocorrelations of the FTSE 100 squared daily returns
Figure 3.10b: Autocorrelations of the S&P 500 squared daily returns
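The stationarity and dependence diagnostics just described can be reproduced in outline with statsmodels; in the sketch below, regression="ct" requests the intercept-and-trend specification used in the thesis:

    from statsmodels.tsa.stattools import acf, adfuller

    # ADF unit root test with an intercept and a trend term ("ct").
    adf_stat, p_value, *rest = adfuller(returns, regression="ct")
    print(f"ADF statistic: {adf_stat:.4f} (p-value: {p_value:.4g})")

    # Autocorrelations of returns (close to zero at all lags) and of
    # squared returns (positive at short lags: volatility clustering).
    acf_returns = acf(returns, nlags=100)
    acf_squared = acf(returns ** 2, nlags=100)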
3.3. Calculation of Value at Risk

This section focuses on how to calculate VaR figures for both single return indexes using the proposed models: the Historical Simulation, RiskMetrics, the Normal-GARCH(1,1) (or N-GARCH(1,1)) and the Student-t GARCH(1,1) (or t-GARCH(1,1)) models. Except for the historical simulation model, which makes no assumption about the shape of the distribution of asset returns, the others have commonly been studied under the assumption that returns are normally distributed. Based on the data examination in the previous section, this assumption must be rejected, because observed extreme outcomes of both single index returns occur more often, and are larger, than the normal distribution predicts. Moreover, volatility tends to change through time, and periods of high and low volatility tend to cluster together. Consequently, the proposed VaR models under the normal distribution either have particular limitations or rest on unrealistic assumptions. Specifically, the historical simulation assumes that the historically simulated returns are independently and identically distributed through time; this assumption is impractical given the volatility clustering in the empirical data. Similarly, although RiskMetrics avoids relying solely on sample observations and makes use of additional information contained in the assumed distribution function, its normal distributional assumption is likewise unrealistic in view of the results of examining the collected data. The normal-GARCH(1,1) and Student-t GARCH(1,1) models, on the other hand, can capture the fat tails and volatility clustering that occur in the observed financial time series, but a normal distributional assumption on their returns is still inconsistent with the empirical data. Despite all this, the thesis still uses the four models under the normal distributional assumption of returns, in order to compare and evaluate their estimates against the predictions based on the Student-t distributional assumption of returns. Besides, since the empirical data exhibit fatter tails than the normal distribution, the thesis intentionally employs the Cornish-Fisher Expansion technique to correct the z-value from the normal distribution to account for fatter tails, and then compares these results with the two sets of results above. Therefore, in this chapter we purposely calculate VaR by separating these three procedures into three different sections; the final results will be discussed at length in chapter 4.

3.3.1. Components of VaR measures

Throughout the analysis, a holding period of one trading day is used. For the significance level, various values of the left-tail probability are considered, ranging from the very conservative level of 1 percent, through 2.5 percent, to the less cautious 5 percent. The various VaR models are estimated using the historical data of the two single return index samples, stretching from 05/06/2002 through 31/07/2007 (consisting of 1305 and 1298 price observations for the FTSE 100 and the S&P 500, respectively) for parameter estimation, and from 01/08/2007 to 22/06/2009 for predicting VaRs and backtesting.
One interesting point here is that, since few previous empirical studies have examined the performance of VaR models during periods of financial crisis, the paper deliberately backtests the validity of the VaR models within the current global financial crisis, from its beginning in August 2007.

3.3.2. Calculation of VaR

3.3.2.1. Non-parametric approach: Historical Simulation

As mentioned above, the historical simulation model assumes that the change in market factors from today to tomorrow will be the same as it was at some point in the past, and VaR is therefore computed directly from the historical returns distribution. Consequently, we treat this non-parametric approach in its own section. Chapter 2 showed that calculating VaR with the historical simulation model is not mathematically complex, since the measure only requires a reasonable stretch of historical data. Thus, the first task is to obtain an adequate historical time series for the simulation. Many previous studies report that the model's predictions are relatively reliable once the window of data used for simulating daily VaRs is no shorter than 1000 observed days. In this light, the study is based on a sliding window of the previous 1305 and 1298 price observations (1304 and 1297 return observations) for the FTSE 100 and the S&P 500, respectively, spanning from 05/06/2002 through 31/07/2007. We selected this window rather than a larger one because adding more historical data means adding older data that could be irrelevant to the future behaviour of the return indexes.

After sorting the past returns in ascending order into equally spaced classes, the predicted VaR is determined as the log-return lying at the target percentile; this thesis uses the three widely adopted percentiles of 1%, 2.5% and 5% of the lower tail of the return distribution. The result is a frequency distribution of returns, displayed as a histogram in Figures 3.11a and 3.11b below. The vertical axis shows the number of days on which returns fall into the various classes. The red vertical lines in the histograms separate the lowest 1%, 2.5% and 5% of returns from the remaining (99%, 97.5% and 95%) returns. For the FTSE 100, since the histogram is drawn from 1304 daily returns, the 99%, 97.5% and 95% daily VaRs are approximately the 13th, 33rd and 65th lowest returns in the dataset, which are -3.2%, -2.28% and -1.67%, respectively, and are roughly marked in the histogram by the red vertical lines. The interpretation is that the VaR gives a number such that there is, say, a 1% chance of losing more than 3.2% of the single asset's value tomorrow (on 1 August 2007). The S&P 500 VaR figures, on the other hand, are slightly smaller in magnitude than those of the UK stock index: -2.74%, -2.03% and -1.53% at the 99%, 97.5% and 95% confidence levels, respectively.

Figure 3.11a: Histogram of daily returns of the FTSE 100 between 05/06/2002 and 31/07/2007
Figure 3.11b: Histogram of daily returns of the S&P 500 between 05/06/2002 and 31/07/2007

Following the VaRs predicted for the first day of the forecast period, we continue to calculate VaRs over the whole forecast period, covering 01/08/2007 to 22/06/2009. Whether the proposed non-parametric model performs accurately in this turbulent period will be discussed at length in chapter 4.
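A minimal sketch of the sliding-window historical simulation described above (illustrative, not the thesis's original spreadsheet implementation); `returns` is assumed to be a one-dimensional numpy array of daily log-returns whose first `window` observations form the estimation sample:

    import numpy as np

    def historical_var(returns, window=1304, tail_probs=(0.01, 0.025, 0.05)):
        """One-day-ahead historical-simulation VaR for each day after the window."""
        forecasts = []
        for t in range(window, len(returns)):
            history = returns[t - window:t]
            # The VaR at tail probability p is the p-th empirical percentile;
            # with 1304 returns, the 1% VaR is roughly the 13th lowest return.
            forecasts.append([np.percentile(history, 100 * p) for p in tail_probs])
        return np.array(forecasts)

With window = 1304 and the FTSE 100 data, the first row would correspond to the forecast for 1 August 2007 quoted above.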
3.3.2.2. Parametric approaches under the normal distributional assumption of returns

This section presents how the daily VaRs are calculated using the parametric approaches, namely RiskMetrics, the normal-GARCH(1,1) and the Student-t GARCH(1,1), under the normal distributional assumption of returns. The results and the validity of each model during the turbulent period are considered in depth in chapter 4.

3.3.2.2.1. RiskMetrics

Compared with the historical simulation model, RiskMetrics, as discussed in chapter 2, does not rely solely on sample observations; instead, it makes use of additional information contained in the normal distribution function. All that is needed is the current estimate of volatility. We therefore first calculate the daily RiskMetrics variance for both indexes over the parameter estimation period from 05/06/2002 to 31/07/2007, based on the well-known RiskMetrics variance formula (2.9) with the fixed decay factor λ = 0.94 (the RiskMetrics system suggests λ = 0.94 for forecasting one-day volatility). The other inputs are easily obtained: the squared log-return and the variance of the previous day. After calculating the daily variance, we then measure VaRs for the forecasting period from 01/08/2007 to 22/06/2009 at the 99%, 97.5% and 95% confidence levels, based on the normal VaR formula (2.6), where the critical z-value of the normal distribution at each significance level is simply computed with the Excel function NORMSINV.

3.3.2.2.2. The Normal-GARCH(1,1) model

For GARCH models, chapter 2 establishes that the most important step is to estimate the model parameters ω, α and β. These parameters have to be estimated numerically, using the method of maximum likelihood estimation (MLE). In practice, many previous studies use professional econometric software for the MLE rather than carrying out the calculations by hand. Accordingly, the normal-GARCH(1,1) model is estimated with a well-known econometric tool, STATA (see Table 3.2 below).

Table 3.2: Parameter estimates of the Normal-GARCH(1,1) model for the FTSE 100 and the S&P 500

Parameters                 FTSE 100       S&P 500
α (ARCH term)              0.0955952      0.0555244
β (GARCH term)             0.8907231      0.9289999
ω (constant)               0.0000012      0.0000011
α + β                      0.9863183      0.9845243
Number of observations     1304           1297
Log likelihood             4401.63        4386.964

Note: the results are from the Normal-GARCH(1,1) model estimated by maximum likelihood, under the assumption that the errors conditionally follow the normal distribution, at the 5% significance level.

According to Table 3.2, the coefficients of the lagged squared returns (α) for both indexes are positive, indicating that strong ARCH effects are present in both financial markets. The coefficients of the lagged conditional variance (β) are significantly positive and less than one, indicating that the impact of 'old' news on volatility is significant. The magnitude of β is especially high (around 0.89 to 0.93), indicating a long memory in the variance. The estimate of ω is 1.2E-06 for the FTSE 100 and 1.1E-06 for the S&P 500, implying a long-run standard deviation of daily market returns of about 0.94% and 0.84%, respectively. The log-likelihood for this model is 4401.63 for the FTSE 100 and 4386.964 for the S&P 500, and the likelihood ratios reject the hypothesis of normality very strongly.
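The two variance recursions and the normal VaR step can be sketched as follows, assuming the standard forms of formulas (2.9), (2.11) and (2.6) with the mean set to zero; the GARCH parameters are the FTSE 100 estimates from Table 3.2, and scipy's norm.ppf plays the role of Excel's NORMSINV:

    import numpy as np
    from scipy.stats import norm

    def ewma_variance(returns, lam=0.94):
        """RiskMetrics EWMA: var_t = lam * var_{t-1} + (1 - lam) * r_{t-1}^2."""
        var = np.empty_like(returns)
        var[0] = np.var(returns[:30])           # arbitrary seed for the recursion
        for t in range(1, len(returns)):
            var[t] = lam * var[t - 1] + (1 - lam) * returns[t - 1] ** 2
        return var

    def garch_variance(returns, omega=1.2e-06, alpha=0.0955952, beta=0.8907231):
        """GARCH(1,1): var_t = omega + alpha * r_{t-1}^2 + beta * var_{t-1}."""
        var = np.empty_like(returns)
        var[0] = omega / (1.0 - alpha - beta)   # long-run (unconditional) variance
        for t in range(1, len(returns)):
            var[t] = omega + alpha * returns[t - 1] ** 2 + beta * var[t - 1]
        return var

    def normal_var(variance, p=0.01):
        """One-day normal VaR as a return quantile, with the mean set to zero."""
        return norm.ppf(p) * np.sqrt(variance)  # p = 0.01 gives z of about -2.33

For reference, norm.ppf(0.01) is approximately -2.3263, which matches NORMSINV(0.01).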
After estimating the model parameters, we first compute the conditional variance (volatility) over the parameter estimation period from 05/06/2002 to 31/07/2007 using the conditional variance formula (2.11), whose inputs are the squared log-return and the conditional variance of the previous day. We then compute the predicted daily VaRs for the forecasting period from 01/08/2007 to 22/06/2009 at the 99%, 97.5% and 95% confidence levels using the normal VaR formula (2.6). Again, the critical z-value of the normal distribution at the 1%, 2.5% and 5% significance levels is computed with the Excel function NORMSINV.

3.3.2.2.3. The Student-t GARCH(1,1) model

Unlike the Normal-GARCH(1,1) approach, this model assumes that the volatility (or the errors of the returns) follows a Student-t distribution. Indeed, many previous studies have suggested that the symmetric GARCH(1,1) model with Student-t distributed volatility is more accurate than its normal counterpart when examining financial time series. Accordingly, the paper additionally employs the Student-t GARCH(1,1) approach to measure VaRs. In this section, the model is used under the normal distributional assumption of returns. The first step is to estimate the model parameters by maximum likelihood, again using STATA (see Table 3.3).

Table 3.3: Parameter estimates of the Student-t GARCH(1,1) model for the FTSE 100 and the S&P 500

Parameters                 FTSE 100       S&P 500
α (ARCH term)              0.0926120      0.0569293
β (GARCH term)             0.8946485      0.9354794
ω (constant)               0.0000011      0.0000006
α + β                      0.9872605      0.9924087
Number of observations     1304           1297
Log likelihood             4406.50        4399.24

Note: the results are from the Student-t GARCH(1,1) model estimated by maximum likelihood, under the assumption that the errors conditionally follow the Student-t distribution, at the 5% significance level.

Table 3.3 reveals the same characteristics in the Student-t GARCH(1,1) parameters as in the normal-GARCH(1,1) approach. Specifically, the estimates of α show that evidently strong ARCH effects occurred in the UK and US financial markets during the parameter estimation period from 05/06/2002 to 31/07/2007. Moreover, as Floros (2008) notes, there is also a considerable impact of 'old' news on volatility, as well as a long memory in the variance. We then follow the same steps as in calculating VaRs with the normal-GARCH(1,1) model.

3.3.2.3. Parametric approaches under the normal distributional assumption of returns modified by the Cornish-Fisher Expansion technique

Section 3.3.2.2 measured the VaRs using the parametric approaches under the assumption that returns are normally distributed. Regardless of their results and performance, this assumption is clearly impractical, since the collected empirical data exhibit fatter tails than the normal distribution. Consequently, in this section the study intentionally employs the Cornish-Fisher Expansion (CFE) technique to correct the z-value derived from the normal distribution so as to account for the fatter tails. Again, whether the proposed models perform well within the recent crisis period will be assessed at length in chapter 4.
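A sketch of the fourth-moment Cornish-Fisher adjustment of the normal z-value; this is the standard expansion in the sample skewness S and excess kurtosis (K - 3), and the exact variant used in the thesis may differ:

    from scipy.stats import norm

    def cornish_fisher_z(p, skew, excess_kurt):
        """Adjust the normal quantile z_p for skewness and excess kurtosis."""
        z = norm.ppf(p)
        return (z
                + (z ** 2 - 1) * skew / 6
                + (z ** 3 - 3 * z) * excess_kurt / 24
                - (2 * z ** 3 - 5 * z) * skew ** 2 / 36)

    # Example: the 1% tail with the S&P 500 sample moments from Table 3.1
    # pushes the quantile well below the normal z of about -2.33.
    z_cf = cornish_fisher_z(0.01, skew=-0.1267, excess_kurt=9.2431)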
3.3.2.3.1. The CFE-modified RiskMetrics

Similar