IEEE P7011 Working Group

Standard for the Process of Identifying and Rating the Trustworthiness of News Sources

May 25, 2018

Meeting Minutes Draft

Held via Skype

 

  1. Call to Order
    1. Chair called the meeting to order at 11:00 AM US ET.
  2. Roll call of Individuals
    1. The chair gave each attendee the opportunity to introduce themselves.
    2. The temporary secretary captured the attendance from Skype.
    3. Chair reminded attendees to send him an email ([email protected]) stating that you wish to become a member and including your name, phone number, preferred email if different from the reply-to field, and area of interest, and noting whether you would like to become a voting member. (See IEEE P7011 5-29-18 Deck, Slide 2.) If you have questions regarding membership status, see the Member Roster in iMeet.
  3. Approval of 27 April 2018 Minutes
    1. Chair moved to approve the minutes.
    2. Seconded by Vicky Hailey
    3. Approved with no objections.
  4. Approval of Agenda
    1. Chair moved to approve the agenda.
    2. Seconded by Vicky Hailey.
    3. Approved with no objections.
  5. Secretary: volunteer Sean La Roque-Doherty, Esq.
    1. Chair moved to approve Sean to the position of Secretary.
    2. Seconded by Vicky Hailey.
    3. Approved with no objections.
  6. Chair announced volunteers for non-officer positions.
    1. Technical Editor: Ayse Kok.
    2. Developer: Giovanni Valerio.
  7. Patent Identification
    1. The chair performed a call for patents and none were identified.
  8. iMeet Review
    1. Chair announced that people have signed up for subgroups, and chair has added everyone who expressed their interest and preference.
    2. If you have not received an iMeet notification, contact the chair.
    3. The chair gave an overview of iMeet, which will be used as the centralized repository for document files, discussions, and resources for the standard.
    4. Chair stated that the subgroups are where the bulk of standards development will take place.
      1. The full WG meetings will be for presenting, discussing, and voting on the developments in subgroups.
      2. In the Files & Discussions tab for P7011 News Site Trust Working Group, the General Discussion folder is for high-level discussion of the standard that does not fit into subgroups.
      3. Project Management is where to set tasks.
        1. Presentation tasks are set for each subgroup on 6-29-18.
    5. Chair opened for questions and concerns
      1. Is there any place to find a definition of what the subgroups are working on?
        1. Chair stated that some subgroups, like Analogous Systems (AS), are straightforward, looking for other systems out there analogous to what we’re doing, for example, Angie’s List.
        2. Same for criteria analysis groups: publisher/author and text analysis.
        3. Data Provenance is a little grayer: how we’re collecting data.
      2. Would it be appropriate to charge the subgroups to come up with their own definition/goals statement?
        1. Chair agreed that is an excellent task.
      3. Are we looking at print and electronic media?
        1. [Chair] The standard is for electronic media.
        2. [Chair] Online portions of news provider/purveyors are within the scope.
  9. Chair opened the floor to subgroup discussion, ideas, and thoughts since we last met.
     

    [Secretary note: speakers are indicated in brackets prefacing their statements. Summarized actions do not have brackets. In many instances, language was not captured verbatim in favor of recording ideas and the gist of the discussion.]
     

    1. Scope of the Standard
      1. Chair suggested we continue the scope conversation as to what we are looking to include or not include, and what “online media” means.
      2. [Emanuel Baker]: What are your thoughts on that?
        • [Chair]: Believes that news purveyors are the most socially beneficial target for the standard.
          1. We talked about rating journalists and articles, but I am not convinced that is the best approach.
          2. Rating or reputation system should aim to inform consumers of news.
            • Consumers don’t look at news coming from a specific journalist.
          3. The most important part of the journalistic process is the editorial process.
            • Drives the voice of news provider/purveyor.
            • Lends itself to integration and providing information to end users.
          4. Purveyor has most meaningful body of information to rate.
          5. Content is either on a news purveyor’s site or linked back to a purveyor’s site.
      3. Baker poses two scenarios:
        1. It’s easy to focus on op-ed section of newspaper, whether online or in electronic format, but the truth of the matter is that a lot of the news articles are opinion pieces rather than factual news. The question is whether we focus on the entire content of the website for the newspaper or focus on just the op-ed portion.
        2. Lot of people get news from 15-second sound bites, such as cable news or blogs. To what extent can we impact these blogs?
      4. Chair read Jonathan Feinstein’s chat posted at 11:40 a.m.: “So I completely disagree. How and who is presenting information is, I believe, the only way to look at this problem. If you look at problems in the media they are both tree and forest problems. I’m not even sure that news purveyors are going to continue to exist in the manner we associate with ‘legitimate’ news outlets.”
      5. [Baker] I agree the nature of news is changing significantly. Newspapers are going out of style and moving to electronic format. I’m not sure how much impact electronic newspapers will have.
      6. [Chair] Cable news is outside the scope. Blogs are a question for the subgroup.
      7. [Jennifer Dukarski] From a media perspective, we’re seeing an explosion since 2016. Traditional print and cable broadcast networks are seeing an explosion in online readership. Where the media sees citizen-journalists creates part of the question of what is real or fake.
      8. Secretary agreed with Jennifer. Blogs, Twitter, Facebook are often the first to publish and news outlets follow.
      9. [Hailey] From a standardization perspective, news purveyors makes sense.
        1. Standards like this are voluntary.
        2. News purveyors would want to say they are stepping up to the voluntary standard and are therefore trustworthy. Then it becomes a question of how one can verify that.
        3. Kathryn [Bennet?] can talk to the link between assessment or certification.
        4. For the standard to be widely available, it would have to reach the blogger and the large media outlets, by making it transparent and accessible.
      10. [Secretary] Would you have an application for a purveyor to submit their site or publication to the standard?
      11. [Hailey] That’s what this group would have to figure out. What’s the verification and validation of the trustworthiness? What are the ingredients of trustworthiness? What’s the purpose? What are the outcomes? And how can external parties, who have no interest, validate?
        1. Levels of trustworthiness according to the source.
        2. Usability: global take-up of something simple that squeezes out the troublemakers that don’t have credibility and can’t be validated for integrity.
      12. [Baker] About scope. You have news items that appear but omit facts to create a slant that fits the writer’s or news company’s agenda. Are we addressing omission of vital facts to create a predetermined slant to the news item?
      13. [Chair] That’s bread and butter for the text analysis subgroup: how they would approach omission, which can be as much of an issue as having incorrect information. A lot of that can be detected from biased language. I think that’s what the criteria subgroup can take a stab at.
      14. Chair read Feinstein’s chat posted at 11:48 a.m.: “I also don’t know how we could possibly create an objective standard without looking at the true source of information. That means that we need to be able to distinguish the source. Also, I want to be very clear that authors are responsible for their work. The only editors may in fact be the authors. I’m also looking to automated analysis. Collecting history information does not require author/editor authorization. Collecting history of past problems, fact checking, etc. does not require cooperation. Complaints are yet another data source. The combination of human sourced analysis and automated analysis do not require cooperation. So rating doesn’t require cooperation. I’m not too sanguine about a voluntary good housekeeping seal. This hasn’t worked well in the past – thinking of trust-e.”
      15. [Dukarski] In the context of that and media, we often think of retraction demands but they are often about defamation concerns that may have been missed quotations or … people ticked off at somebody for no reason. Retraction demands may not be the best statistical manner of trying to figure out the problem.
      16. [Baker] Special interest groups will often send in complaints and you may get a skewed picture of the situation because the special interest group has made it a job of theirs to submit these types of complaints.
      17. [Feinstein] Those types of complaints you’re thinking about, in a conventional news-source environment, aren’t exactly what I had in mind.
        1. I was thinking take-down complaints in electronic form. It’s not clear how you want to read those signals.
        2. If you have content taken down by a reputable source, such as Facebook or other large services, that may be a useful signal. I’m not thinking of things you would find in newspapers.
        3. Although retractions in some cases may not be useful as an indicator of an error score associated with the news source, they may be useful to associate with the article itself, as a historical addition noting that this in fact turned out to be a problem later. Reputable sources put notes at the bottom of the article when there are changes. In true defamation cases, it’s hard to end up with a retraction that has the same impact as the original article. There are difficulties in how the news source will handle a retraction that involves something sufficient to cause defamation issues.
      18. [Baker] Especially when the retraction is published on page 36.
      19. [Feinstein] This is exactly the point.
      20. [Dukarski] It’s compliant with the law. That’s all they must do. But it depends on the jurisdiction.
      21. [Chair] This is a reputation rating not how well it applies to the letter of the law. I think those are not necessarily the same thing.
      22. [Feinstein] Sure. We’re just looking for signals that are already out there.
      23. [Chair] How would we consider voluntary retractions vs forced retractions?
      24. [Feinstein] I think if there is a retraction it is probably forced at some level. … There’s a whole universe out there, retractions are one possibility. It might be interesting to take people doing fact-checking and associate it with the article and source and apply that going forward in terms of reputation.
      25. [Chair] Two things on rating purveyors vs. the articles. Trying to rate an article is just-in-time. Looking at the purveyor or entity, you are ahead of any piece of information from that group.
      26. [Feinstein] I would agree with you on a prospective rating. You need to look at the articles and the problems in the past and associate that with trust in future work and that needs to be at the author level. Even reputable news sources have problems associated with an author or reporter. They can pull the wool over editors’ eyes for a while.
      27. [Chair] Wouldn’t that be a statement about the purveyor, if they’re able to have a reporter pull the wool over the editors’ eyes?
      28. [Feinstein] Purveyor. What do you mean by purveyor? Are you going to distrust the New York Times because they’ve had several authors who have fabricated stories?
      29. [Chair] We’re not talking about a binary rating system. This is a spectrum rating. Those things will be considered.
      30. [Feinstein] You may have someone problematic in the past at one source and they move to a different publication. Don’t you want to look at the person? … And what to do about AP stuff.
      31. [Tabea Wilke] It’s important to be able to identify the people who write. They can change names, for example. People change names and move to different outlets. You can’t identify them by their names but via the language. That’s the reason why we’re working with NLP and text analysis. A cultural approach is another aspect to consider.
      32. [Feinstein] The conversation from our previous meeting, looking at the length of the history: if you have someone new, you will wonder where they came from. Perhaps this is one of the things that will be useful. If the source information has a deep history and they disclose it, that’s useful; if they don’t disclose it, it’s absent or false data. These are pretty good and potentially objective signals.
      33. [Baker] Typically you find there is editorial policy, e.g., the LA Times has it in for California high-speed rail. You get several different articles in different sections lambasting the California High-Speed Rail administration. Each of the writers is a component, and if you see the writers have the same inclination, that gives you a clue to their direction on reporting on a topic.
      34. [Feinstein] If you start seeing different journalists working for a single news outlet, and they have a slant on a topic, you’re going to associate that with editorial as opposed to the authors or editorial in addition to the authors.
      35. [Chair to Wilke] You’re describing creating a fingerprint for an author based on their writing and word usage.
      36. [Wilke] We do it by person, so we can identify individuals. We call it DNA but yes, it’s a fingerprint.
      37. [Feinstein] DNA is better than a fingerprint. I’d go with DNA. … Where would we discuss continued belief that the subgroups as currently designated are correct? There is overlap and there will be new areas as we progress.
      38. [Chair] In the root of iMeet projects, there is a general discussion “News Site Trustworthiness General Discussion” to discuss that.
      39. Chair demonstrated the iMeet folder.
      40. [Chair] The current subgroups are not the end-all-be-all list. This is the getting started list of subgroups.
      41. Feinstein mentioned he had some trouble using Skype. The Chair said going forward we will meet using Joinme, not Skype.
      42. [Feinstein] I think the general discussion is great, but I think it’s worth having a separate thread.
      43. Chair agreed and created discussion “Subgroup General Discussion” at the root of the Files and Discussion tab in iMeet.
      44. [Chair] We don’t have a governance subgroup, for example, but we will have one at some point. I don’t think we’re there yet.
      45. Chair read Giovanni Valerio chat posted 12:05 p.m.: “it is like identifying a voice by its [h]armonic components (timbr) unless someone is trying to imitate someone else (or using a tool to “fake” him).”
      46. [Chair to Wilke] Regarding the DNA of someone’s writing, have you seen gaming attempts? How difficult has that been to address?
      47. [Wilke] It depends. You can start with simple aspects, analyzing text and language, e.g., some keywords or special order of the sentence. Is this what you’re referring to?
      48. [Chair] If we are able to, with a high degree of confidence, determine if two different articles are written by the same person, it would be a powerful tool to implement the standard.
      49. [Wilke] Yes. The question is how much transparency we want to show. In the end, users can see the DNA of person 1 and the DNA of person 2 and compare them to see if they are technically the same or have a high percentage of similarity. It’s hard to set up a purely technical process because in the end you need more than the technical: you need techniques or tools, but also people that classify or attribute text.
      50. [Feinstein] Are you looking at the time stamps and history to find the origins of a story? When stories are showing up first? [Compare how plagiarism software works.] If we can reconstruct through analytical tools, it would be powerful.
      51. [Wilke] I agree. Maybe there are two dimensions. What you are talking about is time (the x-axis). We not only look at the time and development of the article but look at the text itself: the text structure and composition, compared to any other author’s publications. This would be the y-axis when you talk of two dimensions. I think they can be combined and be quite useful.
      52. [Secretary] Can text analysis use metadata for purveyor and other information to provide a rating for a post in addition to the purveyor?
      53. [Feinstein] I think that’s where we end up heading. It seems we can identify the author, whether it is copied from somewhere else, but we won’t be able to analyze the facts. Look at the author and the site and determine whether there was a problem in the past, and it looks like their writing. It’s hard to see how you will be able to verify things or the spin on a post that you’re just now analyzing.
      54. [Chair] One consideration we must make is the separation between professional and personal writing. … Is it fair to impact an organization’s rating based on the personal writings of someone when those writings had nothing to do with the organization?
      55. [Feinstein] Entirely fair not to associate a post on a personal site with the editorial of a publication. But it would be hard to say you wouldn’t associate it with an author.
      56. [Baker] If we do come up with a reputation rating, how will it be implemented?
      57. [Chair] The standard would be implemented by the industry, not by this group. We would publish the standard and work with another group to implement it, just as IEEE publishes standards on electronics but does not manufacture the electronics.
      58. [Hailey] If there is some standard reputation that you’re going to be looking at, let Kathryn [Bennet] know up front so that she can bring in the right people and the level of outside reputation can be explained, because it can shape how you want to construct the standard. It’s a good time to think about it. You can request an overall presentation of your options to stimulate thinking, or you can go down the path and then ask the questions.
      59. [Andrew Schroter] A lot of fake news is really focused on political news. It might be good to also have a topical rating vs an overall rating for a news purveyor.
      60. [Chair] That’s a good idea. I would encourage you to join the Scope WG.
      61. Chair read Giovanni Valerio’s chat posted 12:22 and 12:23: “I expect the “reputation” to be a rating value, maybe from 0 to 100 (o[r] 0.0 to 1.0) but shown to the public only as a 5-6 level scale / but do we differentiate facts from opinion, satire (Josh did show that in the first sample) or other?”
      62. [Chair] I think we would have a subgroup that deals with the GUI. What this will actually look like. Design will be an important piece as to what the analysis produces. We may need to have a better idea of what we are producing before we discuss how we display it.
      63. [Valerio] What I meant is that there are two views of the rating system. One is what the system calculates, and the other is how it is presented: analysis on one side, and what is shown to the public on the other, in a simplified way — something that is usable day-to-day, such as a simple scale. … I was just pointing out two different levels: one is the text and site analysis and the other the presentation level.
    2. Data Provenance
      • Not directly addressed.
    3. Trust & Identity
      • Not directly addressed.
    4. Analogous Systems
      • Not directly addressed.
    5. Criteria for Text Analysis
      • Not directly addressed.
    6. Criteria for Publisher/Author Analysis
      • Not directly addressed.
  10. Vacant Officer Positions
    • Chair reminded attendees the Vice Chair position remains vacant. If anyone is interested reach out to the Chair.
  11. Website Development
    • This topic was not discussed.
  12. New Business
    • No new business was brought up.
  13. Next Meeting is June 29.
    1. Chair will send out a Doodle Poll for the time.
    2. [Feinstein, Baker] The Pacific Time for this meeting was difficult.
  14. Adjourn at 12:30 PM US ET
    1. [Chair] moved to adjourn; seconded by [Secretary].
    2. There were no objections.