Wednesday, August 26, 2020

Eli Lilly and Company: Innovation in Diabetes Care Essay

Eli Lilly and Company began producing and selling insulin in the United States in 1923, and by 1995 it dominated the world insulin market alongside one other company. Even so, Lilly missed several opportunities in diabetes care as it tried to sell its products to the world. What went wrong during that period? A few points stand out. First, Lilly worked hard to improve its products, but, as the case mentions, when Lilly's "Match" product came out, it became a rival to Lilly's own older product. In terms of the product life cycle, it is true that a company needs to launch a new product before the old one stops generating revenue, but Lilly's situation was different: it held roughly 80% of the US diabetes care market, so the new product would cannibalize the revenue the old product produced. Lilly therefore chose not to bring it to market. Prioritizing its own revenue over customer need was not a wise decision. Second, when it came to asking customers what they wanted, Lilly asked the wrong customer. Instead of asking the people who actually use diabetes care products, Lilly went to doctors. What doctors want is nearly the opposite of what patients want: doctors want patients to come back to them regularly, because that is how they earn money, while patients want to be able to manage the condition on their own. Targeting the wrong customer eventually hurt Lilly's revenue. Novo is considered Lilly's main competitor, yet both companies put similar products on the market. Each tries to release the newest product in order to attract more customers and capture more profit. But do customers really want those newest products? The question becomes whether the product should be driven by consumer demand or by technological performance. In this case, the answer is probably consumer demand. Even though both companies release good products, customers are accustomed to the products they already have, and they are sensitive to price, since these products are used only a few times and must be replaced constantly. Although Lilly failed in some respects, it tried hard to understand what its customers really need. Lilly learned that customers would be willing to use its new product if some changes were made, such as making it easier to use or relying on a technology other than injection. More importantly, Lilly learned that most of its customers do not have enough information about diabetes care. So Lilly set up the Controlled Diabetes Services (CDS) program, which educates people and builds a community of patients around the value of their insulin treatment. I think establishing CDS was a good decision: it lets Lilly indirectly learn what its customers need and gives more people a better chance to get to know and use its products.

Focusing on knowing and educating its customers will expose more and more people to the new products and build a habit of using them. If CDS succeeds, then Lilly's new products, such as Match and insulin pens, will face fewer problems when they go on sale. Rather than continuing to compete with similar products already on the market, Lilly should look for new opportunities. It should ask customers what value they get from the product and how they use it, in order to uncover their real needs. Lilly also needs to protect its current market. In the pharmaceutical industry, once you lose a customer, it is much harder to win that customer back: when customers switch to another drug, it takes effort to get used to it, and once they have, they become loyal to it because of switching costs and the inconvenience a change might cause. Even though Lilly has missed some opportunities in the diabetes care market, it is trying to use its resources and new products to regain its advantage; the CDS program, for example, can generate revenue, bring in new customers, and open new markets. Alongside the CDS program, though, it would need to introduce more products. Finally, I do think that, compared with a technology-performance orientation, Lilly should follow a consumer-demand orientation. Competition on technological performance is intense and does not yield much additional profit, while shifting to a consumer-demand orientation gives the company more choices and opportunities.

Saturday, August 22, 2020

Toni Morrison's "Recitatif" Essay

Mark Sommers, Feldman, 11/27/99

Toni Morrison's short story "Recitatif" is about two girls, Twyla and Roberta, who spend part of their childhood in a shelter because their mothers could not properly care for them. The main theme of "Recitatif" is racism, and an intriguing twist is the mystery of the girls' race. Leaving clues but never stating whether Twyla or Roberta is black or white, Morrison makes clear that the girls come from different ethnic backgrounds. At one point Twyla remarks "that we looked like salt and pepper." Because the story is told in the first person, the reader naturally identifies with Twyla. "Recitatif" becomes a pointed test, toying with the reader's emotions and drawing attention to racial stereotypes and the traits attached to them. Morrison never states the girls' races for a reason: to make readers form their own opinions. The story begins with Twyla's mother dropping her off at the shelter. There Twyla meets Roberta, who becomes her best friend; they bond because they are not real orphans with "beautiful dead parents in the sky." Rather than being "real" orphans, they are simply abandoned kids whose mothers did not want them. Although the girls have few friends, their lives do not lack adventure. For example, they enjoy watching the older girls who like to smoke and dance, and, sadly, they get a laugh out of shouting mean things at Maggie, the woman who cannot defend herself because she is mute. One of the last times the girls see each other at the shelter is the day of the picnic. Shortly after the outing, Roberta's mother comes to take her home, marking the first small break in their friendship. The next time they see each other is years later, in the restaurant where Twyla works. Roberta acts icily toward Twyla, partly because she is high on drugs, on her way to a Jimi Hendrix show. Twyla is deeply hurt that her former best friend would treat her so badly. Twelve years later they meet again at a supermarket. Roberta has married a rich man and is now called Mrs. Benson; she wears diamonds and speaks to Twyla much more kindly. By this point Twyla has one child and Roberta has four. Strangely, Roberta acts very friendly, as if she has been reunited with a long-lost best friend. Twyla cannot hold back her feelings and questions Roberta about their last encounter at the restaurant. Roberta brushes it off: "Oh, Twyla, you know how it was in those days: black, white. You know how everything was." A friendly goodbye, and the women go their separate ways again. The third time they meet is at the school Roberta's children attend. Roberta and the other mothers are picketing because they do not want their children to be segregated. This leads to a fight that is not resolved until Twyla and Roberta meet one last time, ending any remaining chance of friendship between the women. The conflict lies within the hearts of two remarkable women, two dear friends, and two different races. "Recitatif" challenges the reader not to judge either girl and to accept her color regardless.

Morrison offers hints that encourage the reader to make assumptions about the girls' race. From the very beginning the author establishes that one girl is black and one is white, but never which is which. In many instances Morrison uses details that are stereotypically "black" or "white," almost begging the reader to infer the race of each girl. Although the puzzle has no answer, what readers decide for themselves says something about their own ethnic background. Morrison plays on the stereotypes people hold about blacks and whites. For example, Twyla's mother told her that "those" people smelled funny because they did not wash their hair. This might suggest that Roberta is black, playing on the stereotype that black people do not wash their hair often. On the other hand, the mother could have been talking about the orphans in general not washing properly, which could make them smell "funny." Everything seems to be a gray area. The afternoon of the picnic, when her mother dropped by, Twyla was embarrassed because

Tuesday, August 11, 2020

What's trending: My Computer Vision final project

The last couple weeks of my fall semester were almost entirely consumed by my final project for 6.869: Computer Vision. So I thought I would share what my project partner Nathan and I came up with!

For those of you who aren't familiar, computer vision is the study of algorithms that give a computer a high-level understanding of images. For instance, here are some questions you might be able to answer using computer vision techniques, all of which we tackled on our psets for the class.

Given a simple natural image, can you reconstruct a 3-D rendering of the scene? On the left: a picture of a simple world comprised only of simple prisms with high-contrast edges on a smooth background. On the right: a 3-D representation of the same world.

Given several images of the same scene taken from different angles, can you stitch them together into a panorama? On the left: original photos of the same landscape. On the right: the same photos stitched together into a single image.

Given an image of a place, can you sort it into a particular scene category? This was actually the focus of an earlier project we did for this class, the Miniplaces Challenge. We were given 110k 128 x 128 images, each depicting a scene and each labeled with one of 100 categories. We used these examples to train a neural network that, given a scene image, attempted to guess the corresponding category. Our network attained 78.8% accuracy on the test set. If you're interested, our write-up for that project can be found here! (A rough sketch of this kind of classifier setup appears below.) The ground-truth categories for these scenes are, clockwise from the top left: bedroom, volcano, golf course, and supermarket.
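To make the scene-classification idea a bit more concrete, here is a minimal sketch of how a classifier like that might be trained. This is not our actual Miniplaces code: the framework (PyTorch), the ResNet-18 backbone, the directory layout, and every hyperparameter are assumptions for illustration only.

```python
# A minimal, hypothetical sketch of a scene classifier for 128 x 128 images
# labeled with one of 100 categories. Architecture and settings are assumed,
# not taken from our actual project.
import torch
import torch.nn as nn
from torchvision import models, transforms, datasets
from torch.utils.data import DataLoader

NUM_CLASSES = 100  # Miniplaces has 100 scene categories

# Standard preprocessing (normalization statistics are an assumption).
train_tf = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Assumes images are laid out as miniplaces/train/<category>/<image>.jpg
train_set = datasets.ImageFolder("miniplaces/train", transform=train_tf)
train_loader = DataLoader(train_set, batch_size=128, shuffle=True, num_workers=4)

# ResNet-18 with its final layer sized for 100 scene categories.
model = models.resnet18(num_classes=NUM_CLASSES)
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

for epoch in range(10):
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```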
For the final project, our mandate was broad: take our ingenuity and the techniques we had learned over the course of the class and come up with a project that contributes something novel to the realm of computer vision. There were some pre-approved project ideas, but my partner and I decided to propose an original one. I worked with Nathan Landman, an MEng student from my UROP group. We wanted to work on something that would tie into our research, which, unlike a lot of computer vision research, deals with artificial, multimodal images such as graphs and infographics. We decided to create a system that automatically extracts the most important piece of information from a line graph: the overarching trend. Given a line graph in pixel format, can we classify the trend portrayed as increasing, decreasing, or neither?

The challenge: identify the trend portrayed in a line graph

Line graphs show up everywhere, for instance to emphasize points in the news. Left: a graph of the stock market from the Wall Street Journal. Right: a stylistically typical graph from the Atlas news site.

This may seem like a pretty simple problem. For humans, it is often straightforward to tell whether a trendline is basically increasing or decreasing; it is a testament to the human visual system that for computers this is not trivial. Imagine the variety of styles and layouts an automatic algorithm would have to handle in a line chart, including variations in color, title and legend placement, and background.

In their paper on reverse-engineering visualizations, Poco and Heer [1] present a highly engineered method for extracting and classifying textual elements, like axis labels and titles, from a multimodal graph, but they do not attempt to draw conclusions about the data contained therein. In their ReVision paper, Savva, Kong, and others [2] made significant progress extracting actual data points from bar and pie charts, but not line graphs. In other words, identifying and analyzing the actual underlying data of a line graph is not something we were able to find significant progress on.

And what if we take it one step further and expand our definition of a line graph beyond the clean, crisp visuals we imagine finding in newspaper articles? What if we include things like hand-drawn images or even emojis? Now recognizing the trendline becomes even more complicated. The iPhone graph emoji, as well as my and Nathan's group chat emoji.

But wait: why do I need this info in the first place?

Graphs appear widely in news stories, on social media, and on the web at large, but if the underlying data is available only as a pixel-based image, it is of little use. If we can begin to analyze the data contained in line graphs, there are applications for sorting, searching, and captioning these images. In fact, a study co-authored by my UROP supervisor, Zoya Bylinskii [3], has shown that graph titles that reflect the actual content of the chart (i.e., that mention the main point of the data portrayed, not just what the data is about) are more memorable. So our system, paired with previous work on extracting meaningful text from a graph, could actually be used to generate titles and captions that transmit data more efficiently and make graphs more effective. Pretty neat, huh?

First things first: we need a dataset

In order to develop and evaluate our system, we needed a corpus of line graphs. Where could we get a large body of stylistically varied line graphs? Easy: we generated it ourselves! Some example graphs we generated for our dataset, and their labels.

We generated the underlying data by taking a base function and adding some amount of noise at each point. Originally we used only straight lines as base functions, and eventually we expanded the collection to include curvier shapes: sinusoids and exponentials. Using Matplotlib, we plotted the data with a variety of stylistic variations, including changes in line color, background, scale, title, legend... the list goes on. (A rough sketch of this generation process appears at the end of this section.) Examples of curvier shapes we added to our data set.

Using this script, we generated 125,000 training images (100,000 lines, 12,500 sinusoids, and 12,500 exponentials) as well as 12,500 validation images. ("Train" and "validation" sets are machine learning concepts: a training set is used to develop a model, and a validation set is used to evaluate how well different versions of the model work after the model has been developed.) Because we knew the underlying data, we were able to assign the graph labels ourselves based on statistical analysis of the data (see the write-up for more details). But we also wanted to evaluate our system against some real-world images, as in, stuff we hadn't made ourselves. So we also scraped (mostly from carefully worded Google searches), filtered, and hand-labeled a bunch of line graphs in the wild, yielding an authentic collection of 527 labeled graphs.
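Here is a minimal sketch of the kind of generation script described above. It is not our actual script: the base functions, noise level, labeling threshold, and Matplotlib styling choices are all assumptions for illustration.

```python
# Hypothetical sketch of synthetic line-graph generation: pick a base function,
# add per-point noise, randomize styling, and derive the trend label from the
# known underlying data rather than from the rendered pixels.
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

def make_graph(path):
    x = np.linspace(0, 10, 50)
    slope = rng.uniform(-2, 2)
    base = rng.choice(["line", "sine", "exp"])
    if base == "line":
        y = slope * x
    elif base == "sine":
        y = slope * x + 3 * np.sin(x)
    else:
        y = np.sign(slope) * np.exp(0.3 * x)
    y = y + rng.normal(scale=1.0, size=x.shape)  # noise at each point

    # Label from the data we generated (threshold is an assumption).
    fit_slope = np.polyfit(x, y, 1)[0]
    if fit_slope > 0.2:
        label = "increasing"
    elif fit_slope < -0.2:
        label = "decreasing"
    else:
        label = "neither"

    # Randomized styling: background, line color, title.
    fig, ax = plt.subplots(figsize=(2, 2), dpi=112)  # ~224 x 224 pixels
    ax.set_facecolor(rng.choice(["white", "#eeeeee", "#ddeeff"]))
    ax.plot(x, y, color=rng.choice(["C0", "C1", "C2", "black"]))
    ax.set_title(rng.choice(["Sales", "Temperature", "Traffic"]))
    fig.savefig(path)
    plt.close(fig)
    return label

print(make_graph("example.png"))
```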
Examples of images we collected (and hand-labeled) for our real-world data set. We hand-label so machines of the future don't have to!

Two different ways to find the trendline

We were curious to try out and compare two different ways of tackling the problem. The first is a traditional, rules-based, corner-cases-replete approach in which we try to actually pick the trendline out of the graph by transforming the image and then writing a set of instructions to find the relevant line. The second is a machine learning approach in which we train a network to classify a graph as increasing, decreasing, or neither. Each approach has pros and cons. The first approach is nice because if we can actually pick out where in the graph the trendline is, we can essentially reconstruct a scaled, shifted version of the data. However, for this to work, we need to place some restrictions on the input graphs; for instance, they must have a solid-color background with straight gridlines. This approach is therefore targeted at clean graphs resembling our synthetically generated data. The second, machine-learning approach doesn't give us as much information, but it can handle a lot more variation in chart styling: anything from our curated real-world set is fair game for this model.

Option 1: The old-school approach

A diagram of the steps we use to process our image in order to arrive at a final classification.

Here is a brief overview of our highly engineered pipeline for determining trend direction:

1. Crop to the actual chart body by identifying the horizontal and vertical gridlines and removing anything outside of them.
2. Crop out the title using an out-of-the-box OCR (text-detection) system.
3. Resize the image to 224 x 224 pixels for consistency (and, to be perfectly frank, because this is the size we saved the images at so that we could fit ~140k of them on the server).
4. Color-quantize the graph: assign each pixel to one of 4 color groups to remove the confusing effects of shading and color variation within a single line.
5. Get rid of the background.
6. Remove horizontal and vertical grid lines.
7. Pick out groups of pixels that represent a trend line. We do this robustly using a custom algorithm that sweeps from left to right, tracing out each line.
8. Once we have the locations of the pixels representing a trend line, convert them to data points by sampling a consistent number of x positions and taking the average y value at each one. We can then perform a linear regression on these points to decide what the salient direction of the trend is. (A rough sketch of this final step follows below.)

As you can see, this is pretty complicated! And it's definitely not perfect. We achieved an accuracy of 76% on our synthetic data and 59% on the real-world data (almost double the probability you would get from randomly guessing, but still leaving a lot to be desired).
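To make that last step concrete, here is a minimal sketch of how traced trendline pixels might be turned into a direction label. This is not our actual pipeline code; the column-averaging scheme and the slope threshold are assumptions for illustration.

```python
# Hypothetical sketch of step 8: sample the traced trendline pixels at a fixed
# number of x positions, fit a line, and threshold the slope.
import numpy as np

def classify_trend(pixel_xs, pixel_ys, img_width, img_height,
                   n_samples=50, slope_threshold=0.1):
    """pixel_xs, pixel_ys: coordinates of the pixels traced out as the trend line.
    Returns 'increasing', 'decreasing', or 'neither'."""
    pixel_xs = np.asarray(pixel_xs, dtype=float)
    # Flip y so larger values mean visually higher on the chart.
    pixel_ys = img_height - np.asarray(pixel_ys, dtype=float)

    # Sample a consistent number of x positions; average the y values near each.
    xs, ys = [], []
    for x in np.linspace(pixel_xs.min(), pixel_xs.max(), n_samples):
        mask = np.abs(pixel_xs - x) < (img_width / n_samples)
        if mask.any():
            xs.append(x)
            ys.append(pixel_ys[mask].mean())

    # Normalize to [0, 1] so the slope threshold is scale-independent.
    xs = np.asarray(xs) / img_width
    ys = np.asarray(ys) / img_height

    slope = np.polyfit(xs, ys, 1)[0]  # linear regression
    if slope > slope_threshold:
        return "increasing"
    if slope < -slope_threshold:
        return "decreasing"
    return "neither"
```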
Option 2: The machine-learning approach

The basic idea of training a machine learning classifier is this: collect a lot of example inputs and label them with the correct answer you want your model to predict. Show your model a bunch of these labeled examples. Eventually, it learns for itself which features of the input are important, which should be ignored, and how to distinguish between the examples. Then, given a new input, it can predict the right answer without you ever having to tell it explicitly what to do.

For us, the good news was that we had a giant set of training examples to show our model. The bad news was that they didn't actually resemble the real-world input we wanted our model to handle! Our synthetic training data was much cleaner and more consistent than our scraped real-world data. Thus, the first time we trained a model, we ended up with the paradoxical result that it scored really well (over 95% accuracy) on our synthetic data and significantly worse (66%) on the real-world test set.

So how can we get our model to generalize to messy data? By making our training data messier! We did this by mussing up our synthetic images in a variety of ways: adding random snippets of text to the graph, adding random small polygons and big boxes in the middle of it, and adding noise to the background. By using these custom data transformations, and by actually training for less time (to prevent overfitting to our very specific synthetic images), we achieved comparable results on our synthetic data (over 94% accuracy) and significantly better real-life performance (84%).
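Here is a minimal sketch of the kind of "mussing up" transformations described above. These are not our actual transforms; the particular text snippets, shapes, noise level, and the use of the Pillow library are assumptions for illustration.

```python
# Hypothetical sketch of degrading a clean synthetic chart image so a model
# trained on it generalizes better to messy real-world graphs.
import numpy as np
from PIL import Image, ImageDraw

rng = np.random.default_rng(0)

def muss_up(img: Image.Image) -> Image.Image:
    img = img.convert("RGB").copy()
    w, h = img.size
    draw = ImageDraw.Draw(img)

    # Random snippets of text dropped somewhere on the chart.
    for _ in range(rng.integers(1, 4)):
        xy = (int(rng.integers(0, w - 40)), int(rng.integers(0, h - 10)))
        draw.text(xy, str(rng.choice(["Q3", "source: web", "2017", "avg"])), fill="black")

    # Random small boxes in the middle of the image.
    for _ in range(rng.integers(1, 3)):
        x0, y0 = int(rng.integers(0, w - 30)), int(rng.integers(0, h - 30))
        draw.rectangle([x0, y0, x0 + int(rng.integers(10, 30)),
                        y0 + int(rng.integers(10, 30))], outline="gray")

    # Additive background noise.
    arr = np.asarray(img, dtype=np.int16)
    arr = arr + rng.normal(scale=8, size=arr.shape)
    return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))

noisy = muss_up(Image.open("example.png"))
noisy.save("example_mussed.png")
```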
A diagram demonstrating the different transformations our images went through before being sent to our network to train.

Looking at the predictions made by our model, we actually see some pretty interesting (and surprising) things. Our model generalizes well to variations in styling that would totally confuse the rules-based approach, like multicolored lines, lots of random text, or titled images. That's pretty cool: we were able to use synthetically generated data to train a model to deal with types of examples it had never actually seen before! But it still struggles with certain graph shapes, especially ones with sharp inflection points or really jagged trends. This illustrates some of the shortcomings in how we generated the underlying data.

On the left: surprising successes of our model, dealing with a variety of stylistic variants. On the right: failures we were and were not able to fix by changing our training setup. The top row contains graphs that our network originally mislabeled, but that it was able to label correctly after we added curvier base functions to our data set. The bottom row contains some examples of images that our final network misclassifies.

A note on quality of life during the final project period

Unanimously voted (2-0) favorite graph of the project, and correctly classified by our neural net.

Computer Vision basically ate up Nathan's and my lives during the last two weeks of the semester. It led to several late nights hacking in the student center, ordering food and annotating data. But we ended up with something we're pretty proud of, and more importantly, a tool that will likely come in useful in my research this semester, which has to do with investigating how changing the title of a graph influences how drastic a trend people remember. If you are interested in reading more about the project, you can find our official write-up here. Until next time, I hope your semester stays on the up and up!

References (only for papers I've referenced explicitly in this post; our write-up contains a full source listing.)

[1] J. Poco and J. Heer. Reverse-engineering visualizations: Recovering visual encodings from chart images. Computer Graphics Forum (Proc. EuroVis), 2017.
[2] M. Savva, N. Kong, A. Chhajta, L. Fei-Fei, M. Agrawala, and J. Heer. ReVision: Automated classification, analysis and redesign of chart images. ACM User Interface Software and Technology (UIST), 2011.
[3] M. A. Borkin, Z. Bylinskii, N. W. Kim, C. M. Bainbridge, C. S. Yeh, D. Borkin, H. Pfister, and A. Oliva. Beyond memorability: Visualization recognition and recall. IEEE Transactions on Visualization and Computer Graphics, 22(1):519-528, Jan 2016.

Post Tagged #6.869