Knowlengr

Algorithm Arrogance at Facebook

2016-06-29 By knowlengr

Image: Pope Paul V, portrait by Caravaggio (Wikipedia: https://en.wikipedia.org/wiki/Pope_Paul_V#/media/File:Paul_V_Caravaggio.jpg)

Posted to a Marketplace report on the most recent content stream tweak by Facebook:

It’s algorithm arrogance. There are many data science specialists working at Facebook, but there is reason to believe the new stream tweaks will not appreciably improve the feed. One reason: users have no way to designate content they *do not* want to see (perhaps ever). Another: Facebook search is so unfriendly that it is rarely used to discover what users *do* want to read. (It’s part of the ever-popular toilet-paper-roll user interface.) In other words, there’s plenty of data, but not enough of the right sort to improve personalized relevance. Sure, not everyone would use a recommendation or search facility, but for those who do, the results would improve. The data “science” folks have become so algorithm-arrogant that you’d be hard pressed even to find a way to personalize and improve your feed with more data.

Filed Under: Machine intelligence Tagged With: data science, Facebook, knowledge management, machine intelligence, recommendation engines

Chasing Big Data Variety: Predictive Analytics, Meet Your Market Foe

2015-04-30 By knowlengr

 

Image: LinkedIn stock price graph, Yahoo Finance via Google Search, 2015-04-30 (screenshot)

The graphic shows the behavior of LinkedIn’s stock price on the afternoon of 2015-04-30. Did your analytics engine (what’s an analytics engine? see the International Institute for Analytics) predict this? If not, what (big?) data were you missing?

Chances are, yours was a Big Data Variety problem. Correlating with only Facebook, Pinterest, and other social media platforms, for example, may have offered a tipoff, but not enough to forecast a 25% single-day plunge.

And before you reach for the “Sell” button, you might want to revisit this two-year-old Forbes story from when the stock price also fell. Did your analytics take that into account? The loss was less dramatic, but the cause was similar.

You may need data from other sources, and more than just sniffing URLs from corporate PR departments a la Selerity. Perhaps your forecasting engine treated the drop as just another day’s or quarter’s data point, without considering the underlying cause. Complex event processing combined with other types of machine intelligence might have produced better results.
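
As a toy illustration of that combination, the sketch below treats a large single-day drop as actionable only when it coincides with guidance-related news on the same date, rather than as an isolated data point. The prices, headline, threshold, and class names are invented for illustration and are not drawn from any real forecasting engine.

```python
# A toy complex-event rule (illustrative only): flag a large one-day price drop
# only when it coincides with guidance-related news on the same date.
from dataclasses import dataclass

@dataclass
class DailyBar:
    date: str
    close: float

@dataclass
class NewsEvent:
    date: str
    headline: str

def flag_compound_events(bars, news, drop_threshold=0.20,
                         keywords=("guidance", "outlook")):
    """Yield (date, pct_drop, headline) when a drop >= drop_threshold coincides
    with a news headline containing one of the keywords."""
    news_by_date = {n.date: n for n in news}
    for prev, curr in zip(bars, bars[1:]):
        pct_drop = (prev.close - curr.close) / prev.close
        event = news_by_date.get(curr.date)
        if (pct_drop >= drop_threshold and event
                and any(k in event.headline.lower() for k in keywords)):
            yield curr.date, pct_drop, event.headline

# Invented numbers and headline, roughly mimicking a 25% single-day plunge.
bars = [DailyBar("2015-04-30", 252.00), DailyBar("2015-05-01", 189.00)]
news = [NewsEvent("2015-05-01", "LinkedIn cuts full-year revenue outlook")]
for date, drop, headline in flag_compound_events(bars, news):
    print(f"{date}: {drop:.0%} drop, corroborated by news: {headline}")
```

A production system would consume streaming quotes and a news feed, but the rule structure, a price event joined with a corroborating event, is the essence of complex event processing.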

Filed Under: Machine intelligence Tagged With: analytics, Big Data, big data variety, machine intelligence, stock forecasting

Why Computers (and Doctors) Need Narratology

2014-12-07 By knowlengr

Image: StoryTellers Cafe (credit: Loren Javier, Flickr)

The analysis by Peter Kramer (@PeterDKramer) in the New York Times story “Why Doctors Need Stories” points, in part, to a challenge faced by clinical decision support systems (CDSS) and by artificial intelligence in health care more generally. CDSS adoption already lags far behind its apparent value, and CDSS is especially weak at making sense of narrative. How people make sense of narrative is itself still a subject of much research in cognitive psychology, with much work remaining to be done. Widespread familiarity with machine learning and keyword search perhaps obscures the importance of vignette-driven inference. The point probably applies beyond health care to other software-assisted analytics. Therein lies the real human role as knowledge worker.

IBM Watson? Work on your narratology.

Filed Under: IBM Watson, Knowledge Management, Natural Language Processing Tagged With: cognition, EHR, knowledge management, patientcentered, privacy

Celebrity’s Anonymous Pen Name ‘Outed’ by Software

2013-07-18 By knowlengr

Image: JGAAP (Java Graphical Authorship Attribution Program)

The role that software plays in stylistic analysis of text is perhaps less surprising to high school and college students than to the general public. Students must routinely submit the essays they write to software that checks for plagiarism and sometimes also makes quality assessments.

In the recent outing of J.K. Rowling as the writer behind the pen name Robert Galbraith, it was reported that software had been used to analyze the text of the Galbraith novel. There is a family of software used by academics for “authorship attribution,” for example to determine whether a recently discovered manuscript is a missing chapter of Don Quijote (a fabricated example). One of these applications is JGAAP, the Java Graphical Authorship Attribution Program. The JGAAP wiki page explains the project as

. . . Java-based, modular, program for textual analysis, text categorization, and authorship attribution i.e. stylometry / textometry. JGAAP is intended to tackle two different problems, firstly to allow people unfamiliar with machine learning and quantitative analysis the ability to use cutting edge techniques on their text based stylometry / textometry problems, and secondly to act as a framework for testing and comparing the effectiveness of different analytic techniques’ performance on text analysis quickly and easily. JGAAP is developed by the Evaluating Variation in Language Laboratory (EVL Lab) and released under the AGPLv3.

How this was accomplished was explained by Patrick Juola, one of the two academic investigators credited with the analysis (along with some initial suspicions by reporters at the Sunday Times), in a post on the blog Language Log. Juola refers to this subdiscipline as “forensic stylometry.”

A one-paragraph extract from Juola’s blog post follows. Note that, in the usual sense of the word, the analysis doesn’t look directly at “meaning.”

The heart of this analysis, of course, is in the details of the word “compared.” Compared what, specifically, and how, specifically. I actually ran four separate types of analyses focusing on four different linguistic variables. While anything can in theory be an informative variable, my work focuses on variables that are easy to compute and that generate a lot of data from a given passage of language. One variable that I used, for example, is the distribution of word lengths. Each novel has a lot of words, each word has a length, and so one can get a robust vector of <X>% of the words in this document have exactly <Y> letters. Using a distance formula (for the mathematically minded, I used the normalized cosine distance formula instead of the more traditional Euclidean distance you remember from high school), I was able to get a measurement of similarity, with 0.0 being identity and progressively higher numbers being greater dissimilarity.
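
To make that concrete, here is a minimal Python sketch of the kind of comparison Juola describes: build a normalized word-length distribution for each text and compute a cosine distance between the two vectors (0.0 for identical distributions, larger for greater dissimilarity). The sample strings, tokenizing regular expression, and length cutoff are illustrative assumptions, and plain cosine distance stands in for the normalized variant Juola mentions.

```python
# Minimal sketch: word-length distributions compared with cosine distance.
# Sample texts and parameters are illustrative, not from Juola's analysis.
import math
import re
from collections import Counter

def word_length_distribution(text, max_len=20):
    """Return a vector v where v[k-1] is the fraction of words with k letters."""
    words = re.findall(r"[A-Za-z']+", text)
    counts = Counter(min(len(w), max_len) for w in words)
    total = sum(counts.values()) or 1
    return [counts.get(k, 0) / total for k in range(1, max_len + 1)]

def cosine_distance(u, v):
    """0.0 means identical distributions; higher values mean greater dissimilarity."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return 1.0 - (dot / norm if norm else 0.0)

sample_a = "The quick brown fox jumps over the lazy dog."
sample_b = "A different author tends toward longer, more ornate vocabulary choices."
dist = cosine_distance(word_length_distribution(sample_a),
                       word_length_distribution(sample_b))
print(f"cosine distance: {dist:.3f}")
```

An actual attribution study would, of course, compare whole novels against known writing samples and combine several feature types rather than two toy sentences.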

 

Filed Under: Blog type, Natural Language Processing Tagged With: machine intelligence, natural language, text processing

Will $100M Trickle Watson Down to SMB Enterprises?

2011-06-02 By knowlengr


Bloomberg News reported that IBM plans to invest an additional $100 million in its Watson technology. Earlier in 2011, Watson delivered on long-unmet expectations for artificial intelligence by easily overwhelming two Jeopardy! champions on national TV. While Watson-like technologies could be used in a variety of settings (e.g., network management or health care), the steep investments IBM has already made suggest that the global services giant has its eye on a revenue stream whose major tributaries are large enterprises: Procter & Gamble, Pfizer, ExxonMobil, JPMorgan Chase.

ArnoldIT’s April Holmes put it this way:

IBM has a Tundra truck stuffed with business intelligence, statistics, and analytics tools [SPSS, InfoSphere Streams and Cognos come to mind – ed.] IBM has no product. IBM . . . has an opportunity to charge big bucks to assemble these components into a system that makes customers wheeze, “No one ever got fired for buying IBM.”

Promising but out of reach? Few have been fired for asking, “Can we afford IBM?” In a recent Technology Review interview, IBM Analytics head Chid Apte admitted that “This technology will form the basis of a new product we will in the future be able to offer all of IBM’s big customers.”

The reasons for the anticipated cost are readily apparent. It has been widely reported that Watson took four years to build, runs on around 2,800 Power7 processor cores, has 15 terabytes of main memory, can operate at 80 teraflops (80 trillion floating-point operations per second), and employs IBM’s SONAS file system with a capacity of 21 terabytes. Watson’s software components included some familiar open source technologies IBM had already adopted elsewhere, such as Eclipse and Apache Hadoop, but new ground was broken in creating a natural language understanding system tailored to perform in the Jeopardy! question-and-answer format. The cost for that capability alone was considerable.

IBM believes this revenue stream will be substantial. According to the Bloomberg article, IBM projects $16B from “business analytics and optimization.” The estimate is probably not unfounded: a 2011 IBM-sponsored study of 3,000 CIOs reportedly found that 4 out of 5 executives considered applying analytics to IT operations part of their “strategic growth plans.”

But what are the prospects for small and medium-sized enterprises (SMBs)? Large data warehouses are not associated only with large enterprises. Small firms, even a one-person consultancy, can easily amass huge quantities of data, and may be even more motivated to make sense of it. However, they are unlikely to have Watson-scale budgets.

Still, there are a few possible scenarios in which Watson technology could reach SMBs:

  • Cloud-based Watson resources, with cost reductions made possible by scale (a la Google search), could become more widely available
  • “Watson Light” — Restricted vocabularies and data sources, possibly sold through IBM partners
  • Bundling of certain Watson components with existing, more affordable IBM products
  • A la carte offerings, such as the CRM-integrated “Next Best Action” recommender systems envisioned by Forrester’s James Kobielus
  • Industry-specific offerings in which the raw Watson capabilities are harnessed behind the scenes by IBM specialists

Providing the robust hardware and software needed to collect, host, and access large-scale data warehouses with Watson-like technologies is not a near-term possibility for smaller enterprises. It is worth remembering that existing natural language technologies, such as the highly effective speech recognition Microsoft integrated into Vista and Windows 7, have not been widely adopted, even though they are efficient and easy to use for many types of human-computer interaction. Other obstacles await early adopters: data quality, provenance, standardization, consensus building for metadata, and special scalability problems such as disaster recovery (DR) and privacy concerns. Early adopters may rely on third-party specialists to pull many of the levers.

Nevertheless, SMBs can take some steps now to lay a foundation for the Watson Era:
  • Identify the highest-payoff opportunities, then refine enterprise-specific use cases to match
  • Develop canonical, standardized systems for metadata and taxonomies
  • Leverage existing standards while monitoring current work on evolving standards
  • Develop small prototype projects using current technologies to assess where payoffs are likely for your organization (e.g., low-cost experiments with Hadoop or similar technologies; a sketch of one such experiment follows this list)
  • Include nontraditional sources, such as email, web traffic, internal and external documents, and project management artifacts
  • Begin to address data quality and provenance by improving internal processes and assigning metrics (even if initially manual)
  • Plan for scaling out warehouses several orders of magnitude beyond current forecasts
  • Collaborate with other groups, especially within industry-specific subcommunities
  • Be on the lookout for template-based “blueprints” that work for industry-specific needs (e.g., subscription-based businesses with periodic renewals, or importers whose margins depend greatly upon shipping costs, etc.)
  • Improve staff capabilities and awareness through internal education, networking, consultants, and recruitment
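
As one example of the low-cost experiment suggested above, here is a minimal MapReduce-style word count over a folder of internal documents, written in plain Python so it can be run locally before committing to a Hadoop cluster. The folder path, file pattern, and stop-word list are placeholders, not recommendations.

```python
# A minimal, local MapReduce-style experiment: count terms across text documents.
# The document path and stop-word list below are placeholders for illustration.
import glob
import re
from collections import Counter

STOP_WORDS = {"the", "and", "of", "to", "a", "in", "for", "is", "on", "that"}

def map_terms(text):
    """Map step: emit lowercase word tokens, skipping trivial stop words."""
    for word in re.findall(r"[a-z']+", text.lower()):
        if word not in STOP_WORDS:
            yield word

def reduce_counts(counters):
    """Reduce step: merge per-document counts into one Counter."""
    total = Counter()
    for c in counters:
        total.update(c)
    return total

def word_count(doc_glob="./documents/*.txt", top_n=20):
    per_doc = []
    for path in glob.glob(doc_glob):
        with open(path, encoding="utf-8", errors="ignore") as f:
            per_doc.append(Counter(map_terms(f.read())))
    return reduce_counts(per_doc).most_common(top_n)

if __name__ == "__main__":
    for term, count in word_count():
        print(f"{term}\t{count}")
```

Conceptually, the same map and reduce steps carry over to Hadoop Streaming or a similar framework once the data outgrow a single machine.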

Watson technologies are a force to be reckoned with. Just when they will make themselves felt in the marketplace is still guesswork, but savvy early adopters will likely seize opportunities that won’t be so easy to pluck later in the adoption curve.

Filed Under: Blog type, IBM Watson, Released, SMB Tagged With: KBS
