This Data Scientist Is BuzzFeed’s Secret Weapon

Their latest content experiment, quizzes, commonly generates over a million views per quiz, with some quizzes racking up tens of millions of shares.

So we’ll try to really figure out what people are engaging with and turn a list of 45 items into a list of 25 items without the duds, reordered to make it as shareable as possible.

The idea is to cluster articles into buckets, and it’s really interesting because that reveals the latent topic interests people have.

BuzzFeed is known as a viral content company, and one of the basic statistics people look at with viruses is the reproduction rate.
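As a hedged illustration (this is not BuzzFeed's actual formula; the function name and numbers are invented), the reproduction rate of a piece of content can be estimated as the number of new views its shares bring in per existing view; content with a rate above 1 keeps spreading on its own:

```python
# Hypothetical sketch: estimating a content "reproduction rate" R.
# R = new views generated by shares / views that produced those shares.
# R > 1 means the content grows on its own; R < 1 means it fades.

def reproduction_rate(views, shares, views_per_share):
    """Estimate how many new views each existing view generates."""
    if views == 0:
        return 0.0
    return (shares * views_per_share) / views

# Example: 100,000 views yielded 8,000 shares, and each share
# draws roughly 15 additional views.
r = reproduction_rate(views=100_000, shares=8_000, views_per_share=15)
print(f"R = {r:.2f}")  # R = 1.20
```

The same comparison against 1 is what makes "go viral" a measurable threshold rather than a metaphor.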

For content, we can tell within an hour or so of publishing what type of stuff we should put prominently on the home page, promote on Twitter, things like that.

One of the other things we do is try to make data accessible to our writers—give them feedback on how content is doing in a consistent and regular way.

On the HR side, we’ve done analysis of how we’ve been hiring in the past, and we’ve come up with ways to measure the productivity of certain editors and certain teams in editorial groups.

In general, we try to take the approach that we have this great editorial team that is creating great content and always experimenting.

It varies by the client, but at the same time we try to take the same approach since, ultimately, we’re trying to create BuzzFeed content.


Ten Simple Rules for Better Figures

We have seven series of samples that are equally important, and we would like to show them all in order to visually compare them (exact signal values are supposed to be given elsewhere).

There are far too many ticks: x labels overlap each other, making them unreadable, and the three-digit precision does not seem to carry any significant information.

Finally, y ticks have been completely removed, and the height of the gray background boxes indicates the [−1,+1] range (this should also be indicated in the figure caption if the figure were to be used in an article).


Questions and answers

This section provides a large resource of useful information on 'grey areas' structured in the form of questions and answers.

What's the first step to be taken in an investigation of misconduct, and who should take the first step?

The first step is always to evaluate the allegation that is being made and attempt to understand clearly the type of ethics issue that is being alleged (fraud, authorship misidentification, plagiarism, etc.).

Editors have the in-depth knowledge of the field, are aware generally of the kinds of research being conducted at which institutions, are often aware of the individual researchers and their reputation, and thus are in the best position (as compared to an Elsevier staff person) to evaluate claims.

Those guidelines note that the date of the publishing or copyright agreement should govern, meaning whichever journal has the earliest formal agreement should be considered the journal where the 'article of record' was published, and other duplicate publications should be retracted.

What are the different sanctions applicable?

From a scientific perspective, fraud is the most serious, as it has the potential to mislead other researchers into fruitless areas of research, confuse readers, and possibly cause harm (especially with respect to medical research and drug treatments).

On the other hand, we recognize that, after giving a particular author (or group of authors) a number of chances (certainly more than one) to learn the rules of proper publication, it would be reasonable for an editor to conclude that consideration of further papers from such an author (for some period of time) would be likely a waste of their time and resources (and those of the peer review community).

The exceptional cases we have identified are, for example, when a research subject's privacy rights have been compromised or where significant harm could occur from improperly following directions or instructions given in an article.

Elsevier's view, however, is that papers made available in our 'Articles in Press' (AiP) service do not have the same status as a formally published article, and we do remove offending papers from AiP (through editorial procedures with which the relevant publishing contact or ScienceDirect staff can assist).

Should an editor investigate based on anonymous publishing ethics complaints?

In principle, all substantial publishing ethics complaints that reach an editor should be looked into, as there could be legitimate reasons why a complainant might wish to preserve their confidentiality. However, in our experience, many recent anonymous complaints lack clarity about the specific misconduct being alleged, which makes it difficult for an editor to consider the complaint in a serious fashion.

It is possible that an author could quote or copy parts of another article without attribution, unintentionally (perhaps the author intended to add the reference later but forgot), but generally this would not occur with whole articles or substantial portions of another article (as it would be difficult for this to be a 'mistake').

What if the Elsevier editor is not satisfied with the response or lack of action taken by the other publisher?

The respective editors should take the primary role in resolving the issue through direct communication, but failing that, or if the problem is not resolved, it may be appropriate to contact the publisher directly.

Who do we contact about this within Elsevier if we do not get a response from the author or the author's institution, or if the other publisher ignores us?

As noted above, we act on the basis of the Elsevier editor's views, and we do not have to wait for the other publisher/journal before acting (with respect to publication of notices, etc.).

This can range from publication of the same article at virtually the same time by the same author, generally resulting from simultaneous submission, to 'self-plagiarism' (submitting the same article to a different journal some time after the original publication), to describing the same research in a slightly different way and seeking to publish the two articles.

If an author or group of authors is a serial offender (repeated misconduct), how do you share the information amongst the different journals?

If there is a group of journals where the duplicate publication is occurring, it would be useful for the respective editors to discuss these matters with each other (or even to raise the matter directly with the other publisher).

If a published paper turns out to be based on a dataset that was simply re-run or slightly amended, and thus lacks originality, how should a retraction be justified and communicated to the authors?

On the basis that it is redundant publication; see the comment above.

Is it justified to withdraw or retract a paper based on scientific misconduct if only one author of the paper is the alleged wrongdoer, whilst the co-author(s) were not and could not have been aware of these practices?

By jointly submitting the manuscript and permitting the corresponding author to sign the publishing agreement with Elsevier on their behalf, all authors bear joint responsibility for the scientific content and compliance with publishing ethics in the process leading to submission.

If a fairly clear-cut case of plagiarism or self-plagiarism is discovered several years after the fact, to what lengths does the editor or publishing contact need to go to contact the authors for an explanation, if the authors have retired, are deceased, or have moved on to whereabouts unknown?

We must collectively do the best job we can.

Is it the responsibility of each individual publishing contact to ensure that journal instructions clearly indicate what is considered plagiarism, redundancy, and dual submission, and how these cases are pursued?

The publishing contact should review current instructions with the editor and work with the legal department to identify areas that should be clarified.

If an institutional disciplinary procedure has been started by a university against a certain author of an Elsevier paper in relation to alleged scientific misconduct, should the editor infer consequences from the outcome of that procedure in his concurrent internal ethics procedure regarding the paper?

The editor should conduct his own independent investigation in accordance with the Elsevier publishing ethics policy.

Is an editor obligated to disclose internal information regarding alleged scientific misconduct to an author's institution as part of that institution's internal disciplinary procedures or investigations?

No, the editor has no such obligation, except under legal procedures if required by applicable law.

How To Find Your Target Audience And Create The Best Content That Connects

Content marketing success starts with knowing how to find your target audience.

By the time you're done reading, you'll understand the whole process. Be sure to download the free audience survey and audience persona templates.

Defining who your real audience is will help you focus not only on creating great content but on creating the right content.

It makes it easier to create content that establishes you as an authority in your industry, rather than creating content for its own sake.

By the time you've answered these questions, you'll have a defined understanding of your audience. This isn't intended to be a deep, detailed process. Consider this a simple starting point.

An audience definition should ideally connect your brand, your audience's demographic, and the action you help them take. Here is what a simple audience definition could look like once you're finished analyzing your audience: '[INSERT YOUR BRAND] creates content to help and inform [INSERT DEMOGRAPHIC] so they can [INSERT ACTION] better.'

One of the big mistakes that early content marketers make is to talk about themselves and their product, rather than the things that their users really care about.

Of course, your product is helpful to your customers, but that doesn't mean it will also be helpful to your blog audience; that group of potential customers is probably interested in a much greater variety of topics.

At CoSchedule, we make editorial calendar software, so this is a combination of social media and content scheduling topics.

As we move away (ever so slightly) from our content core and focus on what our target audience really wants to hear about, we improve the effectiveness of our content marketing and better focus on our target audience's needs.

At the same time, this method will also help us keep the topics we are writing about connected to our true topical focus.

These topics tackle the problems that our product already addresses, but in a way that is specifically geared for what our target audience cares about.

In an early planning meeting, we came up with a few Lean UX-type user personas that were designed to help us solve problems our CoSchedule users actually needed to solve.

When you follow this type of writing process, the person reading your content will often feel like you are speaking directly to them.

From here, you can quickly determine the demographic of your most active users and determine the topics that they share in common.

This dashboard does an excellent job of telling you what your followers are interested in—specifically listing common topics and other Twitter accounts that your followers have in common.

Once the survey results are compiled, Michael shares his insights with his readers, which always results in a great discussion that serves to confirm or refute his assumptions.

In fact, when we analyzed nearly 1 million headlines in the CoSchedule system, we found that the tone and topics covered on each network varied wildly.

By comparing the content that did well versus the content that performed poorly, we can get better insight into what our readers really want to be hearing about.
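One simple way to run that comparison, sketched here with invented post data and field names, is to average an engagement metric per topic and rank the topics:

```python
# Hypothetical sketch: contrast well-performing vs poorly performing posts
# to see which topics resonate. The post data and metric are invented.

posts = [
    {"topic": "headlines",  "shares": 950},
    {"topic": "headlines",  "shares": 720},
    {"topic": "seo",        "shares": 130},
    {"topic": "scheduling", "shares": 640},
    {"topic": "seo",        "shares": 90},
]

def avg_shares_by_topic(posts):
    """Average share count per topic."""
    totals, counts = {}, {}
    for p in posts:
        totals[p["topic"]] = totals.get(p["topic"], 0) + p["shares"]
        counts[p["topic"]] = counts.get(p["topic"], 0) + 1
    return {t: totals[t] / counts[t] for t in totals}

# Rank topics from strongest to weakest performer.
ranked = sorted(avg_shares_by_topic(posts).items(),
                key=lambda kv: kv[1], reverse=True)
for topic, avg in ranked:
    print(f"{topic:<12} {avg:.0f} avg shares")
```

The same pattern scales to any engagement metric (views, comments, time on page); the point is to let the gap between the top and bottom of the ranking tell you what readers actually want.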

When users subscribe to one of our email mailing lists, they are automatically added to an email queue that will ping them about 30 days after they sign up to see if they are enjoying our content.

The purpose of this email is to solicit a response that usually generates meaningful conversation if the reader has something to share. We also use this tactic frequently with the users of our application.
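The scheduling behind such a queue is straightforward; here is a minimal sketch (invented names, standard library only) that picks out subscribers whose sign-up date is exactly 30 days in the past:

```python
# Hypothetical sketch of the 30-day check-in logic: given sign-up dates,
# find subscribers who are due for the feedback email today.
from datetime import date, timedelta

def due_for_checkin(subscribers, today, delay_days=30):
    """Return emails of subscribers who signed up exactly delay_days ago."""
    cutoff = today - timedelta(days=delay_days)
    return [s["email"] for s in subscribers if s["signed_up"] == cutoff]

subs = [
    {"email": "a@example.com", "signed_up": date(2024, 1, 1)},
    {"email": "b@example.com", "signed_up": date(2024, 1, 11)},
]
print(due_for_checkin(subs, today=date(2024, 1, 31)))  # ['a@example.com']
```

In practice an email marketing platform handles this scheduling for you; the sketch just shows how little logic the tactic requires.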

Here's how to set up such an email using several of the top email marketing platforms. You can also learn a lot by being an active member in your own community on social media.

This tool makes competitive comparison and audience research easy. This video provides a basic introduction on how to use it for those purposes:

It's possible that your current content consumers (social followers, blog readers, email subscribers, etc.) may not be the same people who are buying your product or service.

Making assumptions about your audience is one of the worst things a content marketer can do. We end up creating content that makes our audience feel stupid.

They're shortcuts to critical thinking that allow us to feel like we can assess an individual or situation quickly and easily.

However, when you make an assumption about your audience that is incorrect, you run the risk of creating content that alienates them. Whichever way a reader reacts to a false assumption, the result is the same.

This is especially true if you work in a niche you aren't directly passionate about (for example, you could be passionate about creating content while serving an audience in an industry you're not familiar with).

A study found that 77% care about real people in their lives, not brands. Participants felt that relationships were reserved for family, friends, and acquaintances or colleagues.

When you create content that assumes your audience starts by caring about your brand and that it only gets better from there, you've already lost.

You're better off—if you're going to make an assumption—to assume they don't care and that you have to earn your way into their peripheral vision.

But our readers include both newcomers and experts, and they may stumble upon a blog post (through search) out of the order in which we wrote them.

I need to assume that this is the first time the reader has seen the terms, and spell out that call to action (CTA) and search engine optimization (SEO) are what I'm talking about the first time I use them, before using the acronyms in the rest of the post.

How to review a paper

As junior scientists develop their expertise and make names for themselves, they are increasingly likely to receive invitations to review research manuscripts.

Writing a good review requires expertise in the field, an intimate knowledge of research methods, a critical mind, the ability to give fair and constructive feedback, and sensitivity to the feelings of authors on the receiving end.

As a range of institutions and organizations around the world celebrate the essential role of peer review in upholding the quality of published research this week, Science Careers shares collected insights and advice about how to review papers from researchers across the spectrum.

I consider four factors: whether I'm sufficiently knowledgeable about the topic to offer an intelligent assessment, how interesting I find the research topic, whether I'm free of any conflict of interest, and whether I have the time.

- Eva Selenko, senior lecturer in work psychology at Loughborough University in the United Kingdom I'm more prone to agree to do a review if it involves a system or method in which I have a particular expertise.

I've heard from some reviewers that they're more likely to accept an invitation to review from a more prestigious journal and don't feel as bad about rejecting invitations from more specialized journals.

I do this because editors might have a harder time landing reviewers for these papers too, and because people who aren't deeply connected into our research community also deserve quality feedback.

I also consider the journal. I am more willing to review for journals that I read or publish in. Before I became an editor, I used to be fairly eclectic in the journals I reviewed for, but now I tend to be more discerning, since my editing duties take up much of my reviewing time.

I look for specific indicators of research quality, asking myself questions such as: Are the background literature and study rationale clearly articulated?

(Then, throughout, if what I am reading is only partly comprehensible, I do not spend a lot of energy trying to make sense of it, but in my review I will relay the ambiguities to the author.) I should also have a good idea of the hypothesis and context within the first few pages, and it matters whether the hypothesis makes sense or is interesting.

I do not focus so much on the statistics—a quality journal should have professional statistics review for any accepted manuscript—but I consider all the other logistics of study design where it’s easy to hide a fatal flaw. Mostly I am concerned with credibility: Could this methodology have answered their question?

- Michael Callaham, emergency care physician and researcher at the University of California, San Francisco Most journals don't have special instructions, so I just read the paper, usually starting with the Abstract, looking at the figures, and then reading the paper in a linear fashion.

(In my field, authors are under pressure to broadly sell their work, and it's my job as a reviewer to address the validity of such claims.) Third, I make sure that the design of the methods and analyses is appropriate.

After that, I check whether all the experiments and data make sense, paying particular attention to whether the authors carefully designed and performed the experiments and whether they analyzed and interpreted the results in a comprehensible way.

I also scout for inconsistencies in the portrayal of facts and observations, assess whether the exact technical specifications of the study materials and equipment are described, consider the adequacy of the sample size and the quality of the figures, and assess whether the findings in the main manuscript are aptly supplemented by the supplementary section and whether the authors have followed the journal’s submission guidelines.

In addition to considering their overall quality, sometimes figures raise questions about the methods used to collect or analyze the data, or they fail to support a finding reported in the paper and warrant further clarification.

- Fátima Al-Shahrour, head of the Translational Bioinformatics Unit in the clinical research program at the Spanish National Cancer Research Centre in Madrid Using a copy of the manuscript that I first marked up with any questions that I had, I write a brief summary of what the paper is about and what I feel about its solidity.

Nothing is “lousy” or “stupid,” and nobody is “incompetent.” However, as an author your data might be incomplete, or you may have overlooked a huge contradiction in your results, or you may have made major errors in the study design.

Unless the journal uses a structured review format, I usually begin my review with a general statement of my understanding of the paper and what it claims, followed by a paragraph offering an overall assessment.

I may, for example, highlight an obvious typo or grammatical error, though I don’t pay a lot of attention to these, as it is the authors’ and copyeditors’ responsibility to ensure clear writing.

A review is primarily for the benefit of the editor, to help them reach a decision about whether to publish or not, but I try to make my reviews useful for the authors as well. I always write my reviews as though I am talking to the scientists in person.

I want to help the authors improve their manuscript and to assist the editor in the decision process by providing a neutral and balanced review of the manuscript’s strengths and weaknesses and how to potentially improve it.

I try to be constructive by suggesting ways to improve the problematic aspects, if that is possible, and I also try to strike a calm and friendly but also neutral and objective tone.

If I'm pointing out a problem or concern, I substantiate it enough so that the authors can’t say, “Well, that's not correct” or “That's not fair.” I work to be conversational and factual, and I clearly distinguish statements of fact from my own opinions.

So now, I only sign my reviews so as to be fully transparent on the rare occasions when I suggest that the authors cite papers of mine, which I only do when my work will remedy factual errors or correct the claim that something has never been addressed before.

Major comments may include suggesting a missing control that could make or break the authors’ conclusions or an important experiment that would help the story, though I try not to recommend extremely difficult experiments that would be beyond the scope of the paper or take forever.

- Boatman-Reich My reviews tend to take the form of a summary of the arguments in the paper, followed by a summary of my reactions and then a series of the specific points that I wanted to raise.

Mostly, I am trying to identify the authors’ claims in the paper that I did not find convincing and guide them to ways that these points can be strengthened (or, perhaps, dropped as beyond the scope of what this study can support).

If I find the paper especially interesting (and even if I am going to recommend rejection), I tend to give a more detailed review because I want to encourage the authors to develop the paper (or, maybe, to do a new paper along the lines suggested in the review).

Then, I divide the review in two sections with bullet points, first listing the most critical aspects that the authors must address to better demonstrate the quality and novelty of the paper and then more minor points such as misspelling and figure format.

I usually don’t decide on a recommendation until I’ve read the entire paper, although for poor-quality papers, it isn’t always necessary to read everything.

Generally, if I can see originality and novelty in a manuscript and the study was carried out in a solid way, then I give a recommendation for “revise and resubmit,” highlighting the need for the analysis strategy, for example, to be further developed.

The fact that only 5% of a journal’s readers might ever look at a paper, for example, can’t be used as a criterion for rejection if, in fact, it is a seminal paper that will impact the field.

- Callaham If the research presented in the paper has serious flaws, I am inclined to recommend rejection, unless the shortcoming can be remedied with a reasonable amount of revising.

Also, I take the point of view that if the author cannot convincingly explain her study and findings to an informed reader, then the paper has not met the burden for acceptance in the journal.

- Giri This varies widely, from a few minutes if there is clearly a major problem with the paper to half a day if the paper is really interesting but there are aspects that I don't understand.

Occasionally, there are difficulties with a potentially publishable article that I think I can't properly assess in half a day, in which case I will return the paper to the journal with an explanation and a suggestion for an expert who might be closer to that aspect of the research.

It's OK for a paper to say something that you don't agree with. Sometimes I will say in a review something like, “I disagree with the authors about this interpretation, but it is scientifically valid and an appropriate use of journal space for them to make this argument.” If you have any questions during the review process, don't hesitate to contact the editor who asked you to review the paper.

Such judgments have no place in the assessment of scientific quality, and they encourage publication bias from journals as well as bad practices from authors to produce attractive results by cherry picking.

We like to think of scientists as objective truth-seekers, but we are all too human and academia is intensely political, and a powerful author who receives a critical review from a more junior scientist could be in a position to do great harm to the reviewer's career prospects.

I solved it by making the decision to review one journal article per week, putting a slot in my calendar for it, and promptly declining subsequent requests after the weekly slot is filled—or offering the next available opening to the editor.
