
Thursday, March 27, 2014

Writing Update

Considering it's nearly the end of the month, I thought it would be wise to update on my progress with redrafting my latest project. I've been rather busy of late (only yesterday I attended a six-hour assessment centre at a creative agency called Gratterpalm) and so have had to act like a circus performer doing some serious juggling. I've been trying my best to re-write, re-jig and cut/paste my way into a finished draft 2, and I'm pleased to say it's going quite well; I'm still getting up at 6:30 every day to give myself time, though it does leave me pretty shagged-out when I'm on internship. The main reason for this post is something of an addendum about what I've learned on planning revisions of work, and I suppose it's all a bit of a no-brainer when you think about it.

As I said in my previous posts on this subject, the key to getting through a first draft is writing to a proper quota every day to get through it quicker, but more importantly I also said that one has to adjust one's sleeping pattern to optimise one's creativity (citing Orwell's typewriter in bed instead of a hearty breakfast). I've found, though, that the second half of this strategy falls down because, as you'd imagine, re-writing (i.e. thinking seriously and critically) is somewhat impaired when one's eyes are still trying to focus on the screen and going bloodshot quite quickly. This is why I propose that from here on out I shall not write first thing in the mornings, but will instead revise each chapter (around one per day, depending on its length) in order to complete a new draft roughly every three weeks. So far I've edited the prologue and chapters 1-9, leaving me with just five more chapters and an epilogue to revisit. Hopefully, considering my busy schedule of finding a year in industry for next year and looming exams, I'll have finished draft 2 of n within a fortnight!

Monday, March 24, 2014

Carat Media Internship: Journal Entry #22

During this week I had to complete some administrative work for the unit itself, which involved a video that I had to have checked (for client confidentiality purposes). After all this was done I was set a few different tasks, which made the day quite a varied one!

The first task was producing a presentation for an airport client: a keywords campaign. I had a few different line items to analyse, so I used Excel to split the data as usual, but instead of analysing and annotating the spreadsheets I produced graphs showing the top decile (the best-performing tenth of the keywords) and a table contrasting the top three with the worst three for conversions and clicks. This was all accompanied by a commentary on overall performance for each strategy, with recommendations for the future. An interesting task, I thought.


A Decile Analysis
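
For anyone who would rather script this sort of split than build it by hand in Excel, here is a minimal sketch in Python/pandas of how a top-decile cut and a top-three/worst-three table could be produced. The keywords, figures and column names are my own inventions for illustration, not the client's actual data.

import pandas as pd

# Illustrative keyword report; the keywords, figures and column names are assumptions
df = pd.DataFrame({
    "keyword":     ["flights to spain", "cheap parking", "airport hotels", "car hire", "lounge access"],
    "clicks":      [420, 310, 150, 90, 30],
    "conversions": [35, 12, 9, 2, 1],
})

# Top decile: keep the best-performing 10% of keywords by conversions
cutoff = df["conversions"].quantile(0.9)
top_decile = df[df["conversions"] >= cutoff]

# Top three vs worst three by conversions then clicks, as in the table described above
top_three = df.nlargest(3, ["conversions", "clicks"])
worst_three = df.nsmallest(3, ["conversions", "clicks"])

print(top_decile)
print(top_three)
print(worst_three)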

My next assignment was a DSP task, where I analysed a campaign for audience characteristics using what is called a "Pixel," which tracks users so that, in our case, the client could know exactly which groups of people were clicking and, for example, what their income was, matched with age, gender or any other sub-group you could care to mention. However, not all of these categories have information within them, as the pixel may still be building up a picture, which should ideally fill out as time goes by and the available data grows. In my case, the data available to me was limited in one or two areas, but I was still able to discover some intriguing things about our audience.

This was my final task for the day, but between tasks I participated in another meeting, in which I listened to what the good people at one of our search retargeting data partners had to say in their pitch to Carat, extolling the virtues of search retargeting technology for us as an ad agency. The presentation was very interesting: the process of retargeting ads to approximated consumer groups via cookie-tracking pixels was innovative, and it was displayed effectively through a simple, grid-like infographic showing how data informs the process, how the pixel is placed within a publisher's website, and how that pixel garners more engagement as prospecting blends with contextual targeting (which was also demonstrated via an amended "conversion funnel," like the one below).

Conversion Funnel

As you can see above, search retargeting generally sits around the middle of the scale, between discovering customers and driving awareness at one end, and moving in for the sale to generate conversions and business at the other. During the meeting I also came across a strategy that I hadn't heard of: yes, I'd heard about Prospecting, yes, I'd heard about Remarketing, and yes, I'd heard about Contextual. But today I learnt of another strategy: Conquesting. This is basically pitting one company's display ads against a competitor's editorial content, e.g. an ad for Ford placed against an actual article about Vauxhall. It's quite popular at the minute apparently, though it does radiate the '80s-style tactics and language that came with the whole "guerrilla marketing" revolution, when marketing directors and execs started reading books about war tactics as well as Jay Conrad Levinson's classic on the subject. But putting that aside, the meeting was most enjoyable and very informative.

Thank you for reading and see you next week!

Tuesday, March 18, 2014

Carat Media Internship: Journal Entry #21

Last week was an eventful one at Carat, to say the least, as I carried out a new set of tasks revolving around sitelists. The day primarily focused on spreadsheets showing all of our website placements, from which I had to decide which ones to exclude. After analysing every site in every strategy, I compiled a sitelist removal list that was later inserted into DoubleClick. As long and arduous as it may sound, analysing the sites wasn't too bad, mainly because the only ones I had to analyse were those with irrelevant URL names; e.g. in a campaign for a camping site I found sites for boating and island purchasing!

Once I'd completed the site lists I showed them to a colleague, who placed them into an Exclude list on the DSP in order to stop spend going to adverts on those sites in future revisions. He explained the difference between the include and exclude functions and when each would likely be used. For example, in a prospecting campaign, ads are first distributed to as many sites as possible to drum up business, and a trial-and-error method of revision is applied afterwards; this of course means excluding grossly under-performing sites. For other campaign types (such as retargeting ones) an include-led approach may be more efficient, because you already know where the customers have been and will want a possibly narrower site list. A very enjoyable experience that bestowed upon me a new understanding of DoubleClick, which should hopefully benefit my future work.
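
As a rough illustration of the exclusion step described above, here is a small pandas sketch of how placements might be flagged for an exclude list by URL relevance and performance. The site names, columns and thresholds are assumptions of mine, not the actual campaign data or DoubleClick's interface.

import pandas as pd

# Illustrative placement report; site names, columns and thresholds are my own assumptions
sites = pd.DataFrame({
    "url":         ["familycamping.co.uk", "boatsforsale.com", "buyanisland.net", "outdoorgear.co.uk"],
    "clicks":      [320, 15, 4, 210],
    "conversions": [12, 0, 0, 7],
})

# Flag sites whose URL clearly has nothing to do with the campaign theme
irrelevant = sites["url"].str.contains("boat|island", case=False)

# Also flag sites that received clicks but never converted
never_converted = (sites["clicks"] > 0) & (sites["conversions"] == 0)

exclude_list = sites[irrelevant | never_converted]["url"].tolist()
print(exclude_list)  # candidates for the Exclude list on the DSP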

Moving on, a second set of sheets was produced for the same client: day-part analyses for keyword lists tailored to national campaigns, tackling both Scotland and Northern Ireland. These keyword lists were split into the various line items we were running for our client, with each list separated into hourly performance. After organising the tables I discovered that performance for our client was spread across many hours, with some lists performing best in the mornings and some in the evenings. This was of course noted in the analysis.

Dayparts
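
To make the day-part idea concrete, here is a minimal pandas sketch of grouping hourly keyword performance by line item. The line items, hours and figures are invented for illustration.

import pandas as pd

# Illustrative hourly keyword data; line items, hours and figures are invented
data = pd.DataFrame({
    "line_item": ["Scotland", "Scotland", "Scotland", "N. Ireland", "N. Ireland", "N. Ireland"],
    "hour":      [8, 13, 20, 8, 13, 20],
    "clicks":    [120, 60, 30, 25, 80, 140],
})

# Hourly performance per line item, laid out as a day-part table
daypart = data.pivot_table(index="hour", columns="line_item", values="clicks", aggfunc="sum")
print(daypart)

# Each line item's best-performing hour
best_hours = data.loc[data.groupby("line_item")["clicks"].idxmax(), ["line_item", "hour", "clicks"]]
print(best_hours)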

My final task also revolved around keywords, this time for a ticket site, for which I had to separate the performers from the under-performers and list them as before. This was again followed by exclusion on the basis of CPCs, click numbers and CPAs to isolate the few that were performing, which all in all made the day a rather successful one.

 Keyword Groups
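
And as a sketch of the threshold-based exclusion just described: the keywords, figures and cut-off values below are purely illustrative assumptions, not the actual criteria used.

import pandas as pd

# Illustrative keyword data for a ticket campaign; the thresholds are arbitrary examples
kw = pd.DataFrame({
    "keyword":     ["gig tickets", "cheap tickets", "theatre tickets", "free events"],
    "clicks":      [500, 320, 45, 400],
    "conversions": [40, 25, 1, 0],
    "cost":        [150.0, 140.0, 60.0, 90.0],
})

kw["cpc"] = kw["cost"] / kw["clicks"]
# Avoid dividing by zero where a keyword produced no conversions
kw["cpa"] = kw["cost"] / kw["conversions"].replace(0, float("nan"))

# Keep the performers; everything else becomes a candidate for exclusion
performers = kw[(kw["cpc"] < 0.50) & (kw["cpa"] < 10) & (kw["clicks"] >= 100)]
to_exclude = kw[~kw["keyword"].isin(performers["keyword"])]
print(to_exclude[["keyword", "cpc", "cpa"]])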

Monday, March 10, 2014

An Essay on Analysis

HOW TO ANALYSE DATA


This essay is a compilation of pointers detailing how to think analytically and, where appropriate, how best to use Excel to turn that thinking into results. Before we begin, however, I feel I should provide you with a list of abbreviations and definitions:

-       AUDIENCE REACH: Traditionally, the number of people who simply see an advertisement
-       CLICK: When a display advert (e.g. a banner) is clicked on by the user
-       CONVERSION: When a desired action is completed on the part of the user AFTER clicking the advert, e.g. making a purchase.
-       IMPRESSION: Every time a person simply sees the advert on a webpage (see AUDIENCE REACH)
-       LEAD: A direct marketing term for a SIGN-UP of some kind secured for future communications, as opposed to a purchase made in the present.

-       CPC (Cost per Click)
-       CTR (Click-through Rate)
-       CPA (Cost per Acquisition)
-       CPL (Cost per Lead)
-       CPM (Cost per Mille or Thousand)
-       DSP (Demand-side Platform)


PART I: THE ANALYTICAL MIND

The analytical mind is concerned with questions. These questions can be summed up in rather simple lexis such as "Why" and "What," these two words being crucial to reaching conclusions and implementing reactions to them, because they (a) uncover the reasons for a particular event and (b) uncover the implications of its consequences. Other questions, such as "How," don't really tell us much about implications or reasons, but rather about technicalities, explaining the conditions that caused the event. That is useful, of course, but it is not the chief interrogative that gives us a path to solving a problem, because it is chiefly concerned with the past or present and not the future (OUP, 2014: online).


So how do you think in those terms? Well, I'm not going to patronise you with baby steps to which I'm sure you're already accustomed, but below is a simple flow chart that I've devised to visualise an analytical thought path.


As you can see, the "How" node takes us towards the causes of a particular occurrence, but not the reasons behind it. For instance, let's say I want to know how a CPC became so high; the CAUSE of such an event might be that fewer people are clicking the advert. Fair enough, but why aren't they clicking? That is the question we need to ask, and the answer might be that the creative is off-key or the timing is wrong. From this information we can think of solutions to the crux of the issue. But this is just the introduction to our study of analysis: how do we implement it in a concrete example, where we deduce a problem and think of a solution therefrom?


PART II: AN EXCEL EXERCISE

Using Excel we can split data and make it sing and dance as we please in order to get beneath the surface. The following example is taken from a coursework task, courtesy of my academic institution. Please note that none of these names and associated data are based on a real company or client list; all information is fictitious:

Our customer has a business that offers services to multiple clients across multiple sectors. They made £5405 in 2013 but £5710 in 2012, indicating a drop in sales. To the untrained eye this would be evaluated simply as a business weakness, a blanket statement that any fool would be able to spot with ease, as the image below demonstrates:


The problem with this logic is that it has failed to analyse the problem, instead just pointing out the obvious. What we need to do is look deeper, so let's examine more about this client list: who are the people in it and where do they operate? Well, by organising the list by sector and grouping the sales total for each sector in which our client has contracts, we can see a very different picture, one that changes our conclusions. The following graph details the sales totals in clustered columns, showing the sales value per year (2012 and 2013) for each individual sector, and I can tell you that the picture looks very different indeed!


Now let’s have a closer look at those categories. The Leisure and Sports Stadia sectors are clearly increasing in sales value, which gives us a good picture with which to solve our financial problems. Not only have we identified why the problem came into being (declining sales in councils and office refurbishment,) but also what we can do about it in response: Shift focus to those growing sectors like Leisure. In completing this minor analysis we already have an idea of what we can do with the future; if we analyse further into whom those clients are and what they need from us, in addition to discovering how we can deliver that promise and we can ultimately get from this:


To this . . .


And so we can see that as we learn to present facts as numbers alongside one another, they can be most revealing. But the way we have just presented those figures is only one way of interpreting them; data, like anything made of numbers, can be expressed differently, which is why we must also consider the key expressions.
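
Before moving on, here is a minimal pandas sketch of the sector grouping described in Part II. The per-sector figures are my own invention (the exercise's actual data isn't reproduced here); I've only chosen them so that they sum to the headline totals of £5710 and £5405 used above, with councils and office refurbishment declining and Leisure and Sports Stadia growing.

import pandas as pd

# Fictitious client list in the spirit of the exercise; the per-sector split is my own
sales = pd.DataFrame({
    "client": ["A", "B", "C", "D", "E", "F"],
    "sector": ["Councils", "Councils", "Office Refurbishment", "Leisure", "Leisure", "Sports Stadia"],
    "sales_2012": [1800, 1200, 1100, 700, 500, 410],
    "sales_2013": [1200, 900, 800, 1100, 800, 605],
})

# Headline totals: the 'untrained eye' view (a drop from 5710 to 5405)
print(sales[["sales_2012", "sales_2013"]].sum())

# Grouped by sector: the view that actually tells us where to shift focus
by_sector = sales.groupby("sector")[["sales_2012", "sales_2013"]].sum()
by_sector["change"] = by_sector["sales_2013"] - by_sector["sales_2012"]
print(by_sector.sort_values("change"))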


PART III: THE KEY EXPRESSIONS

Expressions include not only raw numbers but fractions, percentages and a plethora of graphs and charts, used not only to demonstrate a particular conclusion but to express it in the way that is easiest to understand. Whenever data is needed, those who seek it want it set before them in a way that doesn't make the brain hurt with confusion, but rather throb with excitement!

This means that the analyst needs not only to reach the conclusion but to show it, which in many cases means performing Excel functions. I've already covered the VLOOKUP function in my Excel 101 tutorial, but I haven't demonstrated the use of Pivot Tables, a highly necessary feature of Excel and a staple of the analyst's toolkit.


PART IV: PIVOT TABLES

Firstly, I expect that you have a whole bunch of data like this:


Of course, all of that is ugly and inconclusive: raw data, in other words, which must be broken down and reassembled properly in the hope of reaching a more satisfactory conclusion. But first, what does this sheet show?

Taken from an offline source as a practice Pivot Table sheet, it shows the numbers of people working and earning across the United States, broken down by sex, race and earnings amongst other variables. Now onto the Pivot.


Click on the "Pivot Table" function found under the DATA tab (bear in mind that I am using Excel for Mac 2011) and you should come across a screen like the one below. If this does not happen first time, you may have to manually select the relevant cells by highlighting the columns you want included using the Command, Shift and arrow keys; highlighting everything you want to include and then pivoting should do the trick.


The "Builder" will appear alongside a (most likely) already filled-in Pivot, and now you may play with it to your heart's desire. Usually I deselect everything Excel has already selected for me and begin again, so un-tick all the Builder's boxes and reselect them. You may find that some variables appear on certain axes and render the table awkward to work with, which is why it is important to be wary of these:


In order to make the table work for you, you will need to change the placement of variables by dragging and dropping them between the different areas. For example, if one of my variables is "Website," then I don't want every website appearing across the top of the sheet, because there are likely to be a large number of individual URLs; they are better placed on the Y axis running lengthways, which makes everything easier to see. This rule generally applies to any situation where one deals with metrics by which to judge (X axis) and the subject being judged (Y axis), which in my example was a URL. Please note also that within those individual boxes you can place variables above and below one another, so that, for example, you can see every URL on a particular day of the week and judge its performance against another time period.
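
For comparison, the same row/column decision exists in code. Below is a minimal pandas sketch where the long list of URLs goes down the rows (index) and the things you judge them by go across the columns; the websites, days and figures are invented.

import pandas as pd

# Illustrative performance data; the websites, days and figures are invented
df = pd.DataFrame({
    "website": ["siteA.com", "siteA.com", "siteB.com", "siteB.com", "siteC.com", "siteC.com"],
    "day":     ["Mon", "Tue", "Mon", "Tue", "Mon", "Tue"],
    "clicks":  [120, 95, 60, 80, 15, 22],
    "cost":    [48.0, 40.0, 30.0, 35.0, 9.0, 11.0],
})

# URLs on the rows (index); the metrics and periods we judge them by across the columns
pivot = df.pivot_table(index="website", columns="day", values=["clicks", "cost"], aggfunc="sum")
print(pivot)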

In any case, what you should be left with is something akin to this:


But this is of course insufficient, because all the numbers are formatted identically and littered with decimals, which is why you must then adjust each new column to its own type of measurement.

This can be done very simply with a column edit: click on the header of the column in question, e.g. Column E, which in my case shows each stratum's earnings for the average year, and every cell in it should be highlighted like so.


From here we must alter how the values within the column are displayed, which means right-clicking and, from the drop-down menu, selecting "Format Cells." Then click the "Number" tab if it is not already highlighted and select the most relevant category, which for me is "Currency." Under Currency, select both the currency used (US$ for me) and the number of decimal places you want to apply (2 is usually sufficient), after which you should be able to see something like this:


And that should be that. You're ready to go and analyse that data, but to do so we must move on to the next part; so once you've set up all your Pivot Tables and have their columns formatted as appropriate, read on for the next tutorial, dealing with data manipulation.
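
(As an aside for anyone scripting rather than clicking: the Format Cells step above has a rough equivalent in pandas, shown below purely for display purposes. The column name and figures are made up.)

import pandas as pd

# Made-up earnings figures, standing in for the column formatted above
earnings = pd.Series([41234.5678, 52890.1, 37765.4321], name="avg_yearly_earnings")

# Roughly equivalent to Format Cells -> Currency (US$, 2 decimal places), for display only
print(earnings.map("${:,.2f}".format))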


PART V: DATA MANIPULATION

To manipulate the data I'd advise inserting what is known as a filter. You can do this simply by highlighting the columns again and clicking on your DATA tab. Under this there should be an option to "Insert Filter," depicted by a symbol like a silver funnel. If you've performed this correctly you should see arrows appear at the top of each column, which means you've just inserted your filter.

Once the arrows appear, you can play with the tool: click an arrow and examine the box that drops down. In the image below you can see an ascending/descending function, which (obviously) organises numerical values from highest to lowest and also sorts text alphabetically. This is known as sorting the data. Often, however, what is more relevant is to filter out unwanted variables. Given that tables featuring variables such as individual URLs tend to contain repetitive data, it makes sense to disregard some options entirely, shrinking a table of 10,000 rows to just 10. To do this, you can check and uncheck boxes to select and deselect particular values found within a column.
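
For the same sort and filter in code, here is a minimal pandas sketch; the websites, figures and the 50-click threshold are illustrative assumptions.

import pandas as pd

# Illustrative table; websites, figures and the 50-click threshold are assumptions
df = pd.DataFrame({
    "website": ["siteA.com", "siteB.com", "siteC.com", "siteD.com"],
    "clicks":  [120, 60, 15, 3],
    "cost":    [48.0, 30.0, 9.0, 2.5],
})

# Sorting: the ascending/descending arrow, in code
ranked = df.sort_values("clicks", ascending=False)
print(ranked)

# Filtering: keep only the rows that meet a condition, e.g. at least 50 clicks
filtered = df[df["clicks"] >= 50]
print(filtered)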


To demonstrate the power of a simple filter in the world of analytics, here is the pre-filtered table:


Here is the table filtered by years in education to show which entries come in at under 100 years (please remember these are aggregate averages by state, not individuals!), and you can see a clear difference.


In summary, if you combine these two powerful tools you can pick out the most and least positive results, and the abnormalities within a body of data, with astounding ease. This is how analysts get through spreadsheets with potentially a million rows of data in as little as a few hours.


PART VI: ANALYTICS WITHIN AD AGENCIES

In my quite limited experience working within an ad agency I've obviously had to work extensively with these tools. But in terms of marketing and advertising performance, what kinds of formulas do we use, and which expressions benefit which situations? The main ones are listed below, with a small worked sketch after the list.

-       Cost per Click (CPC) is the bog-standard cost for display ads and tends to be a primary KPI (Key Performance Indicator) for clients. It is calculated via: TOTAL COST / CLICKS

-       Click-through Rate (CTR) is another popular performance metric, measuring the actual marketing performance of the ad by showing engagement via the number of people who actually clicked, expressed as a percentage. It is calculated via: CLICKS / IMPRESSIONS * 100

-       Cost per Acquisition (CPA) is another costing metric used to discern the average cost for each conversion garnered by a particular advertisement, calculated by: TOTAL COST / CONVERSIONS

-       Cost per Thousand Impressions (CPM) is a less common metric yet still important to some clients, detailing the average cost for every one thousand impressions the ad receives. This is calculated via: TOTAL COST / IMPRESSIONS * 1000. Please note that this is also used for traditional media such as television and radio, impressions being merely the digital term for audience reach.

-       Cost per Lead (CPL) uses a similar formula to CPA but with a crucial difference: while CPA measures the average cost for each conversion, a LEAD is an interested party who has responded to the communication, typically by signing up for future contact. This makes it a form of direct marketing with a response component and a focus on future opportunities, as opposed to general display marketing with an awareness-centric approach designed simply to generate a single conversion without any kind of sign-up. This of course has implications for the results you can expect, because, depending on the target market, some people may find it hard to part with money for online purchases, whereas others may be reluctant or inert when asked to sign up to something. Here is the re-modelled calculation: TOTAL COST / LEADS GENERATED
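
And here is the promised worked sketch, applying the formulas above in Python. The campaign figures are invented purely for illustration.

# A worked sketch of the formulas above; the campaign figures are invented for illustration
total_cost  = 500.00   # total spend
impressions = 250000
clicks      = 1250
conversions = 50
leads       = 80

cpc = total_cost / clicks              # Cost per Click
ctr = clicks / impressions * 100       # Click-through Rate, as a percentage
cpa = total_cost / conversions         # Cost per Acquisition
cpm = total_cost / impressions * 1000  # Cost per thousand impressions
cpl = total_cost / leads               # Cost per Lead

print(f"CPC {cpc:.2f}, CTR {ctr:.2f}%, CPA {cpa:.2f}, CPM {cpm:.2f}, CPL {cpl:.2f}")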

These are the most prominent display calculations used in industry today, as I have experienced them in my quite brief time at Carat. The experience has been a riveting one, it has to be said, with varied days and many tasks that, in my rather long essay above, have only been explained in the slightest detail. I will be continuing my internship series of posts, but the internship itself will be ending after March I think, meaning I've only got another two or so weeks to work. On the whole it's been great!

I hope you enjoyed this essay and/or any of my weekly blog posts you've read, and if you too found it all riveting, please let me know!

Thank You,
Jake


BIBLIOGRAPHY

Oxford University Press (2014) "How". [Online] Available at: http://www.oxforddictionaries.com/definition/english/how?q=how [Accessed 18 January 2014]

Russell, B. (1912) The Problems of Philosophy. Oxford University Press paperback edition. [Reprint: 1971-1972]