
About this Blog

As enterprise supply chains and consumer demand chains have become globalized, they continue to inefficiently share information “one-up/one-down”. Profound "bullwhip effects" in these chains leave managers scrambling with inventory shortages and consumers struggling to understand product recalls, especially food safety recalls. Add to this the increasing use of personal mobile devices by managers and consumers seeking real-time information about products, materials and ingredient sources. The popularity of mobile devices with consumers is inexorably tugging enterprise IT departments toward apps and services. But both consumer and enterprise data are proprietary assets that must be selectively shared to be efficiently shared.

About Steve Holcombe

Unless otherwise noted, all content on this company blog site is authored by Steve Holcombe as President & CEO of Pardalis, Inc. More profile information: Steve Holcombe's LinkedIn profile


Sunday, Jul 21, 2013

Beyond The Tipping Point: Interoperable Exchange of Provenance Data

Introduction

This is our fourth "tipping point" publication.

The first was The Tipping Point Has Arrived: Trust and Provenance in Web Communications. We highlighted there the significance of the roadmap laid out by the Wikidata Project - in conjunction with the W3C Provenance Working Group - to provide trust and provenance in its form of web communications. We were excited by proposals to granularize single facts, and to immutabilize the data elements to which those facts are linked. We opined this to be critical for trust and provenance in whole chain communications. But at that time, the Wikidata Project was still waiting on the W3C Provenance Working Group to establish the relevant standards. No longer is this the case.

The second post was The Tipping Point Has Arrived: Market Incentives for Selective Sharing in Web Communications. There we emphasized the emerging market-based opportunities for information sharing between enterprises and consumers. We were particularly impressed with Google’s definition of "selective sharing" (made with GooglePlus in mind) to include controls for overcoming both over-sharing and fear of sharing by information producers. Our fourth post, below, includes a similar discussion, but this time angled toward the increasing need for data interoperability among web-based file hosting services in the Cloud.

The third post - Why Google Must - And Will - Drive NextGen Social for Enterprises - introduced common point social networking, which we defined as providing the means and functions for the creation and versioning of immutable data elements at a single location. We pointed to Github as a comparison, but proposed that Google would lead in introducing common point networking (or something similar) with a roadmap of means and functions it was already backing in the Wikidata Project. We identified an inviting space for common point social networking between Google's Knowledge Graph and emerging GS1 (i.e., enterprise) standards for Key Data Elements (KDEs), with navigational search for selectively shared proprietary information (like provenance information) as a supporting business model.

This fourth post posits that data elements (like KDEs) can be made accessible from web-based data hosting services in the Cloud that provide content-addressable storage. This is a particularly interesting approach in the wake of the recent revelations regarding PRISM, the controversial surveillance program of the U.S. National Security Agency. The NSA developed its Apache Accumulo NoSQL datastore based on Google's BigTable data model but with cell-level security controls. Ironically, those kinds of controls allow for tagging a data object with security labels for selective sharing. This kind of tagging of a data object within a data set represents a paradigm shift toward common point social networking (or the distributed social networking envisioned by Google's Camlistore, as described below).

The PROV Ontology

The W3C Provenance Working Group published its PROV Ontology on 30 April 2013 in the form of "An Overview of the PROV Family of Documents". The PROV family of documents defines "a model, corresponding serializations and other supporting definitions to enable the inter-operable interchange of provenance information in heterogeneous environments such as the Web."

The W3C recommends that a provenance framework should support these 8 characteristics:

  1. the core concepts of identifying an object, attributing the object to person or entity, and representing processing steps;
  2. accessing provenance-related information expressed in other standards;
  3. accessing provenance;
  4. the provenance of provenance;
  5. reproducibility;
  6. versioning;
  7. representing procedures; and
  8. representing derivation.

These 8 recommendations are more specifically addressed at 7.1 of the W3C Incubator Group Report 08 December 2010. In effect, the W3C Provenance Working Group has now established the relevant standards for exporting (or importing) trust and provenance information about the facts in Wikidata.
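
For readers who want to see what such a framework looks like in code, here is a minimal sketch (ours, not the Working Group's) using Python's rdflib library to express three of the core concepts - identifying an object, attributing it to an agent, and representing the processing step that produced it - in PROV-O terms. The entity, agent and activity names are hypothetical.

```python
# A minimal PROV-O sketch with rdflib; the example.org names are hypothetical.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

PROV = Namespace("http://www.w3.org/ns/prov#")  # the W3C PROV-O namespace
EX = Namespace("http://example.org/")           # hypothetical namespace

g = Graph()
g.bind("prov", PROV)

fact = EX["birthdate-v2"]         # a granular, immutable data element
source = EX["birthdate-v1"]       # the earlier version it was derived from
editor = EX["editor-42"]          # the contributing agent
revision = EX["edit-2013-07-21"]  # the processing step (an activity)

g.add((fact, RDF.type, PROV.Entity))
g.add((editor, RDF.type, PROV.Agent))
g.add((revision, RDF.type, PROV.Activity))
g.add((fact, PROV.wasAttributedTo, editor))   # attribution
g.add((fact, PROV.wasDerivedFrom, source))    # derivation / versioning
g.add((fact, PROV.wasGeneratedBy, revision))  # the processing step

print(g.serialize(format="turtle"))
```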

As we observed in our first tipping point, the Wikidata Project was first addressing the deposit by content providers of data elements (e.g., someone's birth date) at a single, fixed location for supporting the semantic relationships that Wikipedia users are seeking. The export of granularized provenance information about Wikidata facts was on their wish list. Now the framework for making that wish come true has been established. Again, the key aspect for us about the Wikidata Project is that it shouldn’t matter - from the standpoint of provenance - how the accessed data at that fixed location is exchanged or transported, whether via XML meta-data, JSON documents or otherwise. But the fixing of the location for the granularized data provides a critical authenticating reference point within a provenance framework.

Interoperably Connecting Wikidata to Freebase/Knowledge Graph

On 14 June 2013 Shawn Simister, Knowledge Developer Relations for Google, offered the following to the Discussion list for the Wikidata project:

"Would the WikiData community be interested in linking together Wikidata pages with Freebase entities? I've proposed a new property to link the two datasets .... Freebase already has interwiki links for many entities so it wouldn't be too hard to automatically determine the corresponding Wikidata pages. This would allow people to mash up both datasets and cross-reference facts more easily."

Later on in the conversation thread with Maximilian Klein, Wikipedian in Residence, Simister also added "we currently extract a lot of data from [Wikidata Project] infoboxes and load that data into Freebase which eventually makes its way into the Knowledge Graph so [interoperably] linking the two datasets would make it easier for us to extract similar data from WikiData in the future."

See the discussion thread at http://lists.wikimedia.org/pipermail/wikidata-l/2013-June/002359.html

This is a non-trivial conversation between agents of Google and Wikipedia about interoperably sharing and synchronizing data between two of the largest data sets in the world. But we believe that the marketplace for introducing provenance frameworks is to be found among the data sets of file hosting and storage services in the Cloud.

The Rise Of File Sharing In The Cloud

Check out the comparison of file hosting and storage services (with file sharing possibility) at http://en.wikipedia.org/wiki/Comparison_of_file_hosting_services. Identified file storage services include Google Drive, Dropbox, IBM SmartCloud Connections, DocLanding and others. All provide degrees of collaborative and distributed access to files stored in the Cloud. New and emerging services let users perform the same activities across all sorts of devices, which requires similar sharing and synchronization of data across those devices. This has huge ramifications not just for the sharing of personal data in the Cloud, but also for the sharing of proprietary, enterprise data. And when one is talking about proprietary information, one has to consider the introduction of a provenance framework.

The Next Logical Step: Content-addressable Storage In the Cloud

"Content-addressable storage ... is a mechanism for storing information that can be retrieved based on its content, not its storage location. It is typically used for high-speed storage and retrieval of fixed content, such as documents stored for compliance with government regulations." Github is an example that we mentioned in our third tipping point blog. CCNx (discussed a little later) is another example. Camlistore, a Google 20 Percent Time Project, while still in its infancy, is yet another example.

"Camlistore can represent both immutable information (like snapshots of file system trees), but can also represent mutable information. Mutable information is represented by storing immutable, timestamped, GPG-signed blobs representing a mutation request. The current state of an object is just the application of all mutation blobs up until that point in time. Thus all history is recorded and you can look at an object as it existed at any point in time, just by ignoring mutations after a certain point."

Camlistore has so far revealed only that it is handling sharing similarly to Github. But NSA's Apache Accumulo - and a spinoff called Sqrrl - may currently be the only NoSQL solutions with cell-level security controls. Sqrrl, a startup comprised of former members of the NSA Apache Accumulo team, is commercially providing an extended version of Apache Accumulo with cell-level security features. Co-founder Ely Kahn of Sqrrl says, "We're essentially creating a premium grade version of Accumulo with additional analysis, data ingest, and security features, and taking a lot of the lessons learned from working in large environments." We suspect that Camlistore is similarly using security tags (though we can’t say for sure because it is a newly emerged feature not covered in Camlistore's documentation). Camlistore calls what it is doing decentralized social networking. This kind of activity (and more, below) gives us increasing reason to expect that content-addressable products will arise to break the silos of supply chains.
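
For the uninitiated, cell-level security can be pictured with a toy Python analogue (not Accumulo's actual Java API): each cell in a table carries visibility labels, and a scan returns only the cells whose labels intersect the reader's authorizations. Real Accumulo evaluates richer boolean expressions (e.g., "admin|(audit&gov)") over the labels.

```python
# Toy analogue of cell-level security: every cell carries visibility labels.
def visible(labels: set, authorizations: set) -> bool:
    # Simplified to "any matching label"; Accumulo supports boolean expressions.
    return bool(labels & authorizations)

table = [
    # (row, column, value, visibility labels) -- hypothetical supply chain cells
    ("lot-42", "origin",   "Farm A",             {"supplier"}),
    ("lot-42", "pathogen", "STEC screen: clear", {"supplier", "regulator"}),
    ("lot-42", "price",    "$1.20/lb",           {"supplier", "buyer"}),
]

def scan(row: str, authorizations: set):
    return [(col, val) for (r, col, val, labels) in table
            if r == row and visible(labels, authorizations)]

print(scan("lot-42", {"regulator"}))  # sees only the pathogen screening cell
print(scan("lot-42", {"supplier"}))   # sees all three cells
```

The point for selective sharing is that the authorization travels with the datum itself, not with the application that happens to serve it.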

Surveying the Field

Global trade is a series of discrete transactions between buyers and sellers. It is generally difficult – if not impossible – to form a clear picture of the entire lifecycle of products. The proprietary data assets (including provenance data) of enterprises large and small have commonly not been shared for two essential reasons:

  1. the lack of tools for selective sharing, and
  2. the fear of sharing offered under "all or nothing" social transparency sharing models.

We believe that with the introduction of content-addressable storage in the Cloud there will occur a paradigm shift toward the availability of tools for selective sharing among people and their devices. In that context it would be interesting to see the new activities and efforts of Wikidata, Github, Camlistore and Sqrrl connected with the already existing activities and efforts made in supply chains.

Ward Cunningham, inventor of the wiki, formerly Nike’s first Code for a Better World Fellow, and now Staff Engineer at New Relic, has innovated a paragraph-level wiki for curating sustainability data. Ward shows how data and visualization plugins could serve the needs of organizations sharing material sustainability data. Cunningham’s visualization, below, once paired with content-addressable storage, would make greater authenticity and relevancy easier to achieve within the enterprise.

Leonardo Bonanni started SourceMap as his Ph.D. thesis project at the MIT Media Lab. Sourcemap’s inspiring visualizations are crowdsourced from all over the world to transparently show "where stuff comes from". Again, SourceMap, when paired with content-addressable storage, would simplify selective sharing in the Cloud.

There is an evolution of innovation spurring new domain specific solutions. We've put together the following table to emphasize the technologies that we find of interest. The table represents a progression toward solving under-sharing in supply chains with content-addressable storage in the Cloud. The entities in the table below are not specially focused on supply chains, though some are thinking about them. No matter. We are attempting to join the dots looking forward based on a progression of granular sharing technologies (i.e., revision control, named data networking, informational objects).

| product | creator and/or sponsor notes | user experience | selective sharing | content-addressable storage | database |
|---|---|---|---|---|---|
| wikidata | google, paul allen, gordon moore | public human/machine editable | hub data source | xml api | mysql |
| git | git by linus torvalds | revision control, social coding software | public / private | sha-1 content hashing | above storage |
| sqrrl | extends nsa security design to enterprise | tagged security labels | authorization systems | cell level security | apache accumulo nosql |
| CCNx | parc, named data networking | federates content efficiently, avoiding congestion | public by default | content verifiable from the data itself | above storage |
| smallest federated wiki | wiki inventor ward cunningham | wiki like git, paragraph forking | merging like github | paragraph blobs | couchdb, leveldb |
| pardalis | holcombe, boulton, whole chain traceability consortium | sharing, traceability, provenance | hub data access | CCNx-like informational objects | above storage |
| camlistore | google 20 pct project | personal data storage system for life | private by default | github-like json blobs, no meta data | sqlite, mongodb, mysql, postgres, (appengine) |

We'd like to take this opportunity to make special mention of the CCNx (Content-centric Networking) Project at PARC begun by Van Jacobson. The first CCNxCon of the CCNx Project is what brought us - Holcombe and Boulton - together. We were both fascinated with the prospects of applying CCNx to the length of enterprise supply chains. In fact, the first ever "whole chain traceability" funding from the USDA came in 2011 in no small part because author Holcombe - as the catalyzer of the Whole Chain Traceability Consortium - proposed to extend Pardalis' engineered layer of granular access controls using a content-centric networking data framework. It was successfully proposed to the USDA that the primary benefit of CCNx lay in its ability to retrieve data objects based on what the user wanted, instead of where the data was located. We perceive this even now to be critical to smoothing out the ridges of the "bullwhip effect" in supply chains.

Interoperability, Content-addressable storage and Provenance

Stonebraker et al. postulated "The End of an Architectural Era (It's Time for a Complete Rewrite)" in 2007, hypothesizing that traditional database management systems can no longer be the holistic answer to the variety of different requirements; specialized systems will emerge for specific problems. The NoSQL movement made this happen with databases. Google (and Amazon) inspired the NoSQL movement underpinning Cloud storage databases. Nearly all enterprise application startups are now using NoSQL to build datastores at scale, to the exclusion of SQL databases. The Oracle/Salesforce/Microsoft partnership announcement in late June 2013 is well framed by the rise of NoSQL, too. Now we are seeing the same begin to happen with an introduced layer of content-addressable storage leading to interoperable provenance.

In our third tipping point blog, we opined that Google must drive nextgen social for enterprises to overcome the bullwhip effects of supply chains. Google has been laying a solid foundation for doing so by co-funding the Wikidata Project, proposing the integration of Wikidata and Knowledge Graph/Freebase, nurturing navigational search as a business model, and gaining keen insights into selective sharing with Google Plus (and the defunct Google Affiliate Network). Call it common point social networking. Call it decentralized social networking. Call it whatever you want to. The tide is rising toward "the inter-operable interchange of provenance information in heterogeneous environments."

Is the siloed internet ever to be cracked? Well, it is safe to say that the NSA has already cracked the data silos of U.S. security agencies with its version of NoSQL, Apache Accumulo (again, based on Google's BigTable data model). Whatever your feelings or political views about surveillance by the NSA (or Google), it is an historical achievement. Now, outside of government surveillance programs, web-based file sharing is just beginning to shift toward content-addressable data storage and sharing in the Cloud. That holds forth tremendous promise for cracking the silos holding both consumer and enterprise data. There are very interesting opportunities for establishing "first mover" expectations in the marketplace for content-addressable access controls in the Cloud.

The future is here. It is an interoperable future. It is a content-addressable future. And when information needs to be selectively shared (to be shared at all), the future is also about the interoperable exchange of provenance data. Carpe diem.

_______________________________

Authors: 

Steve Holcombe
Pardalis Inc.


Clive Boulton
Independent Product Designer for the Enterprise Cloud
LinkedIn Profile

 

Saturday, Jan 19, 2013

Whole Chain Traceability: A Successful Research Funding Strategy - Part II

Return to Part I

Preface

I published Part I in January 2012. Since then Oklahoma State University has announced that funding has been provided toward the formation of a National Institute for Whole Chain Traceability and Food Safety. The principal investigators are Dr. Michael Buser, Biosystems & Agricultural Engineering, and Dr. Brian Adam, Agricultural Economics. This post is to correct an omission in attribution of work product provided by Pardalis Inc., which has played (and apparently continues to play) a critical role in OSU's intended formation of the National Institute.

A Handshake Agreement

In late 2009 I approached Oklahoma State University (OSU) researchers. We (that is, the OSU researchers and Pardalis) verbally agreed to go after governmental and private "food traceability" funding which would be used by OSU to employ Pardalis' database system - engineered from Pardalis’ patents.[1] That database system was physically taken into possession by OSU behind their firewall in January 2010. Concurrently, Pardalis introduced OSU researchers to its informal “traceability consortium” of researchers at North Dakota State University and Michigan State University, and to John Bailey at Top 10 Produce LLC, Salinas, California. Together these institutions and companies submitted a $4 million application (for use over a projected 5 years) under the USDA NIFA Specialty Crops Research Initiative (SCRI) for a coordinated agricultural project entitled A Real–Time, Item Level, Stakeholder Driven Traceability System for Fresh Produce. My work product contribution (i.e., Pardalis’ contribution) for this first funding submission is found in Objective 2: Development of a Data Repository Traceability System.[2] That work product includes a high-level, diagrammatic comparison of the product traceability initiative's one-up/one-down approach to information sharing with an approach more akin to the sharing of information found within social media (using Pardalis' database system).

Prior work product of Pardalis’ that was drawn upon by me for developing Objective 2 may be found in Laying the First Plank of a Supply Chain Ownership Web in North Dakota (September 2008), International Symposium on Certification and Traceability for Food Safety and Quality (Oct 2007),  Banking on Granular Information Ownership (April 2007) and A New Information Marketplace for the Beef Industry (March 2004).

While we were waiting to hear back on the SCRI submission the opportunity to file for Oklahoma EDGE Funding arose. But what to apply for? I proposed that we assume that the SCRI funding would be successful, and that we seek additional funds for establishing a product center from the Oklahoma EDGE Fund submission. We did that and the $3.1M submission (co-authored and co-signed by me) was entitled Leveraging USDA Specialty Crops Funding to Lay the First Foundational (sic) of a Conceptual OSU Agricultural Product Traceability Center.

"The investment of EDGE funds is not necessary for the long-term success of the Pardalis/OSU collaboration; it will accelerate the success. When it comes to information technology, Oklahoma is universally regarded as a "fly-over" state. The investment of EDGE funds are essential in providing Oklahoma with the best opportunity to (a) rapidly expand the number of researchers, technicians, and support services in the area of food safety within the State of Oklahoma, (b) rapidly grow an existing, home-grown advanced technology company in Oklahoma [i.e., Pardalis Inc.], and (c) accelerate the development and deployment of an Agricultural Product Traceability Center at OSU that will be well positioned to attract $100s of millions in federal research grants and/or privately funded research."

Neither the SCRI submission nor the Oklahoma EDGE submission was successful. No reviewers' comments of any value were provided by the EDGE fund. But the USDA’s reviewers’ comments were valuable:

"The proposed project addresses a major concern of the produce industry, traceability of fresh produce. The project focuses on traceability at the item level, which is on the wish list of the industry. The proposal to explore social media systems could be used to trace items through complex supply chains .... The proposal needs to be heavily reorganized and a clear, well defined plan for the proposed research developed."[3]

Introducing and Defining "Whole Chain Traceability"

The failures of the prior funding submissions did not deter the OSU researchers and me. In the summer of 2010 we went after two significant USDA Agricultural and Food Research Initiative (AFRI) submissions, both providing an opportunity for funding of up to $25M over 5 years. This is where my contributions to the funding submissions became more distinct and modular. "Food traceability" needed to be more clearly defined in our funding submissions to increase the chances of the success we had so far been missing. That's when I created and contributed new work product.[4] See Pardalis’ Facilities, Equipment and Resources letter that was attached to the AFRI STEC submission entitled Stakeholder-driven food supply safety system for a real-time detection, risk assessment, and mitigation of Shiga toxin-producing Escherichia coli [STEC] in beef products. In the letter you’ll find the work product where I first introduced and defined "whole chain traceability" by extending the CTIDs envisioned by the IFT/FDA to the more granular CTIDs envisioned by Pardalis' patents. Here's an excerpt:

"A useful explanation of the benefits of a "whole chain" produce traceability system may be made with critical traceability identifiers (CTIDs), critical tracking events (CTEs) and Nodes.[5]. Critical tracking events (CTE) are those events that must be recorded in order to allow for effective traceability of products in the supply chain. A Node refers to a point in the supply chain when an item is produced, process, shipped or sold. CTE’s can be loosely defined as a transaction. Every transaction involves a process that can be separated into a beginning, middle and end.

While important and relevant data may exist in any of the phases of a CTE transaction, the entire transaction may be uniquely identified and referenced by a code referred to as a critical tracking identifier (CTID). Now, with the emergence of biosensor development for the real-time detection of foodborne contamination, one may also envision adding associated real-time environmental sampling data from each node. The challenge is in using even top of the line "one up/one down" product traceability systems (compare CTID2 in the foregoing drawing with CTID2 in the next drawing) that, notwithstanding the use of a single CTID, are inherently limiting in the data sharing options provided to both stakeholders and government regulators. With the proposed stakeholder-driven "Whole Chain" product traceability system, in which CTID2 is essentially assigned down to the datum level, transactional and environmental sampling data may in real-time be granularly placed into the hands of supply chain partners, food safety regulators, or even retail customers.

This is a vision of "whole chain" chain (sic) sharing that goes well beyond "one up/one down" information sharing, and recognizes the need for control or "data ownership" by each [of the] stakeholders."
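
To make "CTID2 assigned down to the datum level" concrete for readers doing their diligence, here is a hypothetical Python sketch (ours, for illustration only, not the system described in the submissions): each field of a critical tracking event gets its own content-derived identifier, so any single datum can be referenced, verified or selectively shared without exposing the rest of the event.

```python
# Hypothetical sketch: datum-level CTIDs derived from the data itself.
import hashlib, json

def ctid(value) -> str:
    """An identifier derived from the datum itself; change the value and
    you necessarily get a new CTID, so the identified datum is immutable."""
    return hashlib.sha256(json.dumps(value, sort_keys=True).encode()).hexdigest()[:12]

cte = {  # one critical tracking event at one node of the supply chain
    "node": "Processor-7",
    "event": "shipping",
    "lot": "lot-42",
    "temp_log_c": [3.9, 4.1, 4.0],  # real-time environmental sampling data
}

# Assign a CTID to every datum, then one to the event as a whole.
datum_ctids = {field: ctid(value) for field, value in cte.items()}
event_ctid = ctid(datum_ctids)

print(event_ctid)                 # identifies the whole transaction
print(datum_ctids["temp_log_c"])  # shareable on its own, without the rest
```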

A similar “facilities, equipment and resources” letter from Pardalis was attached to the AFRI Norovirus submission entitled Stakeholder driven food supply safety system for a real-time detection, risk assessment, and mitigation of [norovirus] food borne contamination. And the work product from both of these letters was clearly incorporated into the all-important narrative for each AFRI submission, respectively. When you do your diligence, ask to see the content in each AFRI submission under Objective 5. Development of Stakeholder-driven Traceability System. In particular ask to see Phase 2 of Objective 5, entitled Technical development and deployment of “whole chain” product traceability system. Dr. Michael Buser has copies of those submissions as he was a lead principal investigator for all of the funding submissions beginning with the SCRI submission, above.

We filed the two AFRI submissions in August and September of 2010, respectively. We found out later (in 2011) that neither of these AFRI submissions would be successful. But we felt very strongly that we were on the right track. Consequently, in October 2010 this work product is also described in detail in my technology transfer proposal to Dr. Clarence Watson, then the Associate Director of the Oklahoma Agricultural Experiment Station, entitled Proposing a National Agricultural Product Traceability Center at Oklahoma State University.[6]

Pardalis' work product relative to "whole chain traceability" was also described in detail in The Bullwhip Effect series I authored and published beginning in January 2011.

The graphical representations describing whole chain traceability (or derivatives of them) are included in a series of presentations in 2011 (made with the full knowledge and participation of OSU researchers) to companies and institutions like Syngenta [presentation], the Oklahoma State Department of Health [presentation] (with a personal introduction by Senator James Halligan), Nestlé (when I initiated contact with Dr. Sam Saguy), Gartner (when I initiated contact with Vladimir Krasojevic, then a supply chain Research Director), GS1 (when I initiated contact with Stephen Arens, then Director of Strategic Partnerships), the OSU Food and Agricultural Products Center [presentation] (at the invitation of Director Roy Escoubas), the Association of Overseas Chinese Agricultural, Biological, and Food Engineers [presentation], the EU-funded SmartAgriFood Project (when I made contact with Dr. Sjaak Wolfert), and others.

Pardalis' work product is again described in detail in an internal OSU provost proposal submitted in June 2011 entitled A Content-Centric Food Traceability System. While the provost proposal was unsuccessful, it is essentially a "lite" version of the USDA NIFSI proposal successfully made (finally) as described in Part I. As with the AFRI submissions, above, another Pardalis Facilities, Resources and Equipment letter was attached to the ultimately successful USDA NIFSI submission. When you are doing your due diligence, ask to see the narrative to the USDA NIFSI submission under Objective 1. Develop a working, scalable stakeholder-driven “whole chain” agricultural commodity traceability system.[7] The work product from Pardalis' letter is clearly incorporated into the narrative of the USDA NIFSI submission.

The same work product is included in a composite fashion in Slide 10 of my August 2011 slide presentation, A New Way of Looking at Information Sharing in Supply & Demand Chains. It was included in the technology transfer proposal I made in November 2011 to Dr. Stephen McKeever, the OSU Vice President for Research and Technology Transfer, entitled Proposing Expansion of the National Institute for Whole Chain Traceability & Food Safety Research.[8]

There's more but I'll stop there.

2012

For two years permission was granted by me for OSU to use Pardalis’ work product on a handshake. The handshake signified that, with funding, the technology transfer issues would be dealt with in a win/win relationship between OSU and Pardalis. That win/win attitude was well represented in the article, Tracking and Incentivizing Data Sharing: The Whole Chain Traceability Consortium, posted by Food+Tech Connect in November 2011. But in January 2012 the handshake agreement ended. Pardalis' database system was returned.

In April 2012 Dr. McKeever responded to an email message of mine. I followed-up with this question:

"Is work product and/or copyrighted material contributed by me and/or Pardalis continuing to be used by OSU without permission or compensation to build upon the success of the narrative of the funded NIFSI project in funding submissions made subsequent to [the termination of the handshake agreement]?"

I have to-date received no reply from Dr. McKeever. I’m not attaching that communication or linking to it here but it is time-stamped Sun, April 22, 2012 4:00 pm to Dr. McKeever’s OSU email address. In doing your due diligence, maybe Dr. McKeever will share a copy with you.

In June 2012 OSU announced that it was providing funding toward the formation of a National Institute for Whole Chain Traceability and Food Safety. This announcement included no attribution of any of the efforts or work products provided by Pardalis. Furthermore, the criticality of Pardalis’ work product to the proposed formation of the National Institute was, in at least one instance in 2012, on display in a promotional presentation made by one of the principal investigators of the National Institute. See Slides 12 and 13 of the presentation by Dr. Brian Adam (entitled Whole-Chain Traceability – Information Sharing from Farm to Fork and Back Again) which was apparently shown in June 2012 to the National Value Added Conference in Traverse City, Michigan. Again, the inclusion of these slides was made without attribution to, or permission by, Pardalis. In fact those two slides used by Dr. Adam were personally prepared by me in 2011 and have been either untouched or barely retouched. The essence of Pardalis’ work product in extending the CTIDs envisioned by the IFT/FDA to the more granular CTIDs envisioned by Pardalis' patents clearly remains.

Conclusion

Whatever the reasoning or justification by OSU for the omission, the point of this blog post is to correctly attribute the proposed founding of the National Institute to Pardalis’ work product. No permission or license was given by Pardalis to OSU that would justify OSU behaving as if "whole chain traceability" had never been introduced and defined by me in 2010 for our research strategy. Moreover, there was no "work for hire" agreement. And following the termination of the handshake agreement there was no permission or license given to OSU to continue to use Pardalis' work product as if it were OSU's work product. 

Due diligence advice

It is clear that the roots of the intended National Institute run deep into Pardalis’ work product but that this contribution has so far gone without attribution. And it is clear that OSU continued to use Pardalis’ work product without attribution in 2012 to promote the founding of the National Institute. If you are a researcher or company who is involved in activities connected with the National Institute, or you are considering becoming so involved, do your own due diligence investigation as to the behavior of OSU in using Pardalis’ work product without attribution. 

What about the clear connection between Pardalis' work product and Pardalis' patents? Is OSU infringing the methods of Pardalis’ patents? Attribution doesn't solve this dilemma. The burden is on OSU to show that it is not infringing. If it is using Pardalis' patents then a direct, express license is required. If you are considering licensing or using any technologies that come out of (or are provided by) the National Institute relating to the networking of immutable informational objects then, again, do your due diligence research.

Closure

At my solicitation Dr. Sam Saguy of the Hebrew University of Jerusalem - along with several others[9] - graciously gave an early letter of support to the formation of the National Institute. Dr. Saguy said something that has stayed with me:

"It is worth noting that the ... founding of the Institute is truly a milestone made possible by the activities of the Whole Chain Traceability Consortium (WCTC). Unfunded, multi-institutional activities do not commonly coalesce and stay together for as long as they have with the WCTC. Keeping such a group together in advance of funding is no small challenge. The WCTC participants at Oklahoma State University, North Dakota State University, Michigan State University, the University of Arkansas, and Pardalis, Inc. have [implemented what I describe as] “sharing-is-winning” principles. Only through this open and mutual interests and sharing of resources most future hurdles will be overcome with fruitful outcome."[10]

Thank you, Dr. Saguy. I could not agree more.

____________________
Endnotes

  1. This was the same database system that had been commercially deployed in late 2005 to a Texas livestock market. For more information see The Tipping Point Has Arrived: Market Incentives for Selective Sharing in Web Communications (July 2012).
  2. Background information. Not linked to or otherwise included here. If need be, you can ask Dr. Michael Buser at OSU for a copy of the SCRI submission.
  3. Email message to Dr. Deland J Myers, School of Food Systems, North Dakota State University, time-dated Tue, June 1, 2010 11:14 am from USDA\NIFA regarding Specialty Crop Research Initiative, Proposal Number: 2010-01167, Proposal Title: A Real-Time, Item Level, Stakeholder Driven Traceability System for Fresh Produce.
  4. It was at this time that I also personally solicited the participation of researchers at the University of Arkansas in the quest for funding by the “traceability consortium” then comprised at its core of Oklahoma State University, North Dakota State University, Michigan State University and Pardalis, Inc.
  5. The following picture was heavily influenced by Appendix I of the IFT/FDA Traceability in Food Systems Report, Vol. 1.
  6. See p. 4.
  7. You might also inquire as to whether the content in Objective 1 has been modified or updated from January 2012 forward to completely remove reference to Pardalis’ work product. See also the "due diligence" advice, below.
  8. See p. 3. By the way, proposing "expansion" of the National Institute was actually part and parcel of asking for more formal recognition of the National Institute by the OSU administration such as that given by OSU when it made its announcement in June 2012.
  9. See the updates to The Whole Chain Traceability Consortium (November 2011). Some or all of those letters - including Dr. Saguy's - may have been withdrawn or not used by OSU following the termination of the handshake between Pardalis and OSU.
  10. Letter dated 9 Oct 2011 to Dr. Brian Adam. Hyperlink added.


Monday, Jan 7, 2013

The Roots of Common Point Authoring (CPA)

Common Point Authoring (CPA) is timely and relevant for ameliorating the fear factors revolving around data ownership. Those fears are multiplying from the ever-increasing use of unique identification on the Internet as applied to both people (e.g., social security numbers) and products (e.g., unique electronic product numbers and RFID tags).

Q&A: What is an informational object?

Consider the electronic form of this document (the one you are reading right now) as an example of an informational object. Imagine that you are the author and owner of this informational object. Imagine that each paragraph of this object has a granular on/off switch that you control. Imagine being able to granularly control who sees which paragraph even as your informational object is electronically shared one step, two steps, three steps, etc., down a supply chain with people or businesses you have never even heard of. Now further imagine being able to control the access to individual data elements within each of those paragraphs.

The methods for CPA were first envisioned in regards to transforming the authoring of paper-based material safety data sheets (MSDSs) in the chemical industry into a market-driven, electronic service provided by chemical manufacturers for their supply chain customers. You may think of MSDSs as a type of chemical pedigree document authored by chemical manufacturers and then handed down a multi-party supply chain as it follows the trading of the chemical.

At the time, we crunched some numbers and found that MSDSs offered as a globally accessible software service could be provided to downstream users for significantly less than what it cost them to handle paper MSDSs. But we further recognized that our business model for global software services wouldn’t work very well unless the fear factors revolving around MSDSs offered as a service were technologically addressed.

That is, we asked the question, “How can electronic information be granularly controlled by the original author (i.e., creator) as it is shared down a supply chain?”
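
One way to picture an answer is with a minimal Python sketch (our illustration for this post, not Pardalis' patented implementation): an informational object whose paragraphs are immutable once written, each carrying a switch that only the original author controls, no matter how far downstream the object travels.

```python
# Minimal sketch of an author-controlled informational object.
class InformationalObject:
    def __init__(self, author: str):
        self.author = author
        self.paragraphs = []  # immutable once added
        self.grants = {}      # paragraph index -> readers switched "on"

    def add_paragraph(self, text: str, readers: set) -> int:
        self.paragraphs.append(text)
        idx = len(self.paragraphs) - 1
        self.grants[idx] = set(readers)
        return idx

    def grant(self, idx: int, reader: str):
        """Only the author flips the switches, however far the object travels."""
        self.grants[idx].add(reader)

    def view(self, reader: str) -> str:
        return "\n\n".join(p for i, p in enumerate(self.paragraphs)
                           if reader == self.author or reader in self.grants[i])

msds = InformationalObject(author="chem-mfr")
msds.add_paragraph("Handling and storage instructions ...", {"distributor", "retailer"})
msds.add_paragraph("Proprietary formulation notes ...", {"distributor"})
print(msds.view("retailer"))  # sees only the handling instructions
```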

When it comes to information sharing in multi-tenancies, the prior art (i.e., the prior patents and other published materials) to CPA at best refers to collaborative document editing systems where multiple parties share in the authoring of a single document. A good example of the prior art is found in a 1993 Xerox patent entitled 'Updating local copy of shared data in a collaborative system' (US Patent 5,220,657 - Xerox) covering:

“A multi-user collaborative system in which the contents as well as the current status of other user activity of a shared structured data object representing one or more related structured data objects in the form of data entries can be concurrently accessed by different users respectively at different workstations connected to a common link.”

By contrast, CPA's methods provide for the selective sharing of informational objects (and their respective data elements) without the necessity of any collaboration. More specifically, CPA provides the foundational methods for the creation and versioning of immutable data elements at a single location by an end-user (or a machine). Those data elements are accessible, linkable and otherwise usable with meta-data authorizations. This is especially important when it comes to overcoming the fear factors to the sharing of enterprise data, or allowing for the semantic search of enterprise data. To the right is a representation from Pardalis' parent patent, "Informational object authoring and distribution system" (US Patent 6,671,696), of a granular, author-controlled, structured informational object around which CPA's methods revolve.

That is, the critical means and functions of the Common Point Authoring™ system provide for user-centric authoring and registration of radically identified, immutable objects for further granular publication, by the choice of each author, among networked systems. The benefits of CPA include minimal, precise disclosures of personal and product identity data to networks fragmented by information silos and concerns over 'data ownership'.

When it comes to "electronic rights and transaction management", CPA's methods have further been distinguished from a significant patent held by Intertrust Technologies. See Methods for matching, selecting, narrowcasting, and/or classifying based on rights management and/or other information (US Patent 7,092,914 - Intertrust Technologies). By the way, in a 2004 announcement Microsoft Corp. agreed to take a comprehensive license to InterTrust's patent portfolio for a one-time payment of $440 million.

CPA's methods have been further distinguished worldwide from object-oriented, runtime efficiency IP held by these leaders in back-end, enterprise application integration: Method and system for network marshalling of interface pointers for remote procedure calls (US Patent 5,511,197 - Microsoft), Reuse of immutable objects during object creation (US Patent 6,438,560 - IBM), Method and software for processing data objects in business applications (US Patent 7,225,302 - SAP), and Method and system to protect electronic data objects from unauthorized access (US Patent 7,761,382 - Siemens).

For more information, see Pardalis' Global IP.

Friday, Jan 4, 2013

Why Google Must - And Will - Drive NextGen Social for Enterprises

Preface

This is our third "tipping point" publication.

The first was The Tipping Point Has Arrived: Trust and Provenance in Web Communications. We highlighted there the significance of the roadmap laid out by the Wikidata Project. It was our opinion that:

"[a]s the Wikidata Project begins to provide trust and provenance in its form of web communications, they will not just be granularizing single facts but also immutabilizing the data elements to which those facts are linked so that even the content providers of those data elements cannot change them. This is critical for trust and provenance in whole chain communications between supply chain participants who have never directly interacted."

The second post was The Tipping Point Has Arrived: Market Incentives for Selective Sharing in Web Communications. We there emphasized the emerging market-based opportunities for information sharing between enterprises and consumers:

"We know this is a big idea but in our opinion the dynamic blending of Google+ and the Google Affiliate Network could over time bring within reach a holy grail in web communications – the cracking of the data silos of enterprise class supply chains for increased sharing with consumers of what to-date has been "off limits" proprietary product information."

Introducing Common Point Social Networking

For the purposes of this post we introduce and define Common Point Social Networking:

Common point social networking provides the means and functions for the creation and versioning of immutable data elements at a single location by an end-user or a machine which data elements are accessible, linkable and otherwise usable with meta-data authorizations.

The software developers reading this post may recognize similarities with Github. Github is perhaps the canonical proxy for fixed, common point sharing adoption. Software developers publish open source software development projects, providing source code distribution and means for others to contribute changes to the source code back to a common repository. Version control provides a code level audit trail.
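
Why does version control yield that audit trail? Because each commit records the hash of its parent, history forms a tamper-evident chain anchored at the common repository. Here is a minimal Python sketch of the principle (not git's actual implementation, though git likewise hashes content with SHA-1):

```python
# Minimal hash-chained commit log: a code-level audit trail.
import hashlib, time

def commit(parent_hash: str, author: str, content: str) -> dict:
    c = {"parent": parent_hash, "author": author,
         "ts": time.time(), "content": content}
    payload = f'{c["parent"]}|{c["author"]}|{c["ts"]}|{c["content"]}'
    c["hash"] = hashlib.sha1(payload.encode()).hexdigest()
    return c

history, parent = [], "root"
for author, content in [("alice", "v1"), ("bob", "v1 + fix")]:
    c = commit(parent, author, content)
    history.append(c)
    parent = c["hash"]

# Rewriting any earlier commit changes its hash and breaks every descendant's
# "parent" pointer, so the trail is self-verifying.
for c in history:
    print(c["hash"][:8], c["author"], c["content"])
```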

In July 2012 Github took a $100M venture capital investment from Andreessen Horowitz. There’s no doubt that some of this funding will be used by Github to compete in the enterprise space. But we further offer here that Google is better positioned to lead the current providers of enterprise software and cloud services in introducing a new generation of online social networks in the fertile ground between enterprises and consumers. We propose that Google lead by introducing and/or further encouraging a roadmap of means and functions it is already backing in the Wikidata Project. We have identified an inviting space for common point social networking to serve as a bridge between Google's Knowledge Graph and the emerging GS1 standards for Key Data Elements (KDEs).

A Sea Change in Understanding

In 2012 there was a sea change in understanding that greater access to proprietary enterprise data is necessary for creating new business models between enterprises and consumers. Yet there remains confusion on how to do so. There is much rhetorical cross-over these days between the social networking of "personal data" and "enterprise data" but enterprise data is - and will long remain - different from personal data. Again, in our opinion, enterprise data is overwhelmingly a proprietary asset that must be selectively accessed at a granular level from a fixed, common point to have any chance of being efficiently shared.

GS1 and Whole Chain Traceability

From 2010 through 2011, Pardalis Inc. catalyzed a successful research funding strategy in a series of “whole chain traceability” funding submissions seeking to employ the use of granular, immutable data elements in networked communications.[1] The computer networking aspects of this food supply chain research were based upon a granularization of critical tracking events (CTEs) with a high-level derivation of Pardalis’ patented processes for registering immutable data elements and their informational objects at a fixed location with meta-data authorizations. See Whole Chain Traceability: A Successful Research Funding Strategy. At the solicitation of co-author Holcombe, GS1 gave an early letter of support to this process, and GS1 was subsequently kept “in the loop”, too. This successful research funding strategy has from all appearances subsequently been given a favorable nod by GS1 in one of its recent publications, Achieving Whole Chain Traceability in the U.S. Food Supply Chain - How GS1 Standards make it possible. Here’s an excerpt -

"To achieve whole chain traceability, trading partners must be able to link products with locations and times through the supply chain. For this purpose, the work led by the Institute of Food Technologists described two foundational concepts: Critical Tracking Events (CTEs) and Key Data Elements (KDEs). With GS1 Standards as a foundation, communicating CTEs and KDEs is achievable."

So who is GS1, you ask? GS1 is "the international not-for-profit association dedicated to the development and implementation of global standards and solutions to improve the efficiency and visibility of supply and demand chains globally and across multiple sectors." You know that unique barcode symbology you see on the products you purchase? That barcode is standardized by GS1 and may include KDEs.

We applaud the introduction of KDEs by GS1. The inclusion of KDEs is a necessary step for moving beyond the lugubrious one-up/one-down information sharing that is overwhelmingly prevalent in today’s enterprise supply chains. Enterprises have long been comfortable with one-up/one-down sharing, pushing generic products down the chain. But it is a mode of information sharing that doesn’t fit well at all into today’s consumer demand chains, which want to pull real-time, trustworthy information. Furthermore, one-up/one-down information sharing significantly contributes to the "bullwhip effect" within supply chains, which costs enterprises in a number of ways, as explained in more detail in The Bullwhip Effect:

"The challenge is not one of fixing the latest privacy control issue that Facebook presents to us. Nor is the challenge fixed with an application programming interface for integrating Salesforce.com with Facebook. The challenge is in providing the software, tools and functionalities for the discovery in real-time of proprietary supply chain data that can save people's lives and, concurrently, in attracting the input of exponentially more valuable information by consumers about their personal experiences with food products (or products in general, for that matter) …."

But KDEs by themselves will not necessarily rid supply chains of the bullwhip effect. Without implementing a more social, fluid nature to the sharing of information in supply chains, KDEs may even increase the brittleness of one-up/one-down information sharing between database administrators, just more granularly so with "digital sand". For instance, industry standards for granular XML objects may be a bane … or a boon. It largely depends on the effectiveness of the hierarchical administrative decision-making processes overseeing each data silo. Common point social networking holds forth a promise for implementing KDEs in a manner that overcomes the bullwhip effect.

But even with the most efficient and effective management processes, it is almost unimaginable to us that the first movement toward enterprise-consumer social networking will come from incumbent enterprise software systems. Sure, the first movement could potentially come from that direction, but we’ve just had too many experiences with enterprises and software vendors to put much faith in that actually happening. Conversely, we can much more easily imagine a first movement toward nextgen social from the "navigational search" demands of consumers. In our second tipping point blog we illustrated this point in some respect with Google Affiliate Networks. This time we are making our point with Google’s Knowledge Graph.

Navigational Search As A Business Model

Google's Knowledge Graph was announced this year as having been added to Google's search engine. Knowledge Graph is a semantic search system. Of course it’s not the only semantic search system. Bing incorporates semantic search. So do Ask.com and Wolfram Alpha. Siri provides a natural language user interface. But no matter the semantic search engine, the search results are revealed as a list of ranked, relevant “answers” (or perhaps no answer at all because there isn’t one to give). Searching for real answers in real-time is still kind of a navigational mess, whether by commission or omission.

"For the semantic web to reach its full potential in the cloud, it must have access to more than just publicly available data sources. It must find a gateway into the closely-held, confidential and classified information that people consider to be their identity, that participants to complex supply chains consider to be confidential, and that governments classify as secret. Only with the empowerment of technological ‘data ownership’ in the hands of people, businesses, and governments will the Semantic Cloud make contact with a horizon of new, ‘blue ocean’ data." Cloud Computing: Billowing Toward Data Ownership - Part II.

Knowledge Graph is a "baby step" toward navigational search that provides a kind of Wikipedia "look and feel" experience designed to help users navigate more easily toward specific answers. Ever used the "I’m Feeling Lucky" button provided by Google? That button taps into Google's semantic search system to provide a navigational search resulting in a single result. This is an attempt to provide a purposeful effect instead of an exploratory effect to your search request. Yes, it's still a "hit or miss" artifice but - make no mistake - it has been introduced to push forward navigational search as a business model. Google's business intent for navigational search is to discourage you from going to other search engines for your search needs. Knowledge Graph is designed to cut short a process of discovery which may take you away from Google to a competitive search engine. This move toward navigational search is exactly why we are proposing that now is the time for common point social networking. Without common point social networking, navigational search will largely remain a clever, albeit unsatisfactory, solution for what consumers really want. What consumers want is real-time, meaningful, trustworthy information about the products they buy or are interested in buying. As Amit Singhal, Senior Vice-President of Engineering at Google, says:

"We’re proud of our first baby step - the Knowledge Graph - which will enable us to make search more intelligent, moving us closer to the "Star Trek computer" that I've always dreamt of building. Enjoy your lifelong journey of discovery, made easier by Google Search, so you can spend less time searching and more time doing what you love."

Conclusion: Whole Chain Communications from Navigational Search

So much of the information that consumers desire about the products they buy - or may buy - is currently locked up in enterprise data silos. But the realistic prospects for common point social networking mean that navigationally searching for enterprise data - as a business model - is no longer an impossible challenge akin to Starfleet Academy's Kobayashi Maru. The ultimate goal for Google's navigational search is essentially that of providing not just whole chain traceability but real-time, whole chain communications for consumers via their mobile devices. The ultimate goal for GS1's standards for granular whole chain traceability is to similarly provide opportunities for real-time, navigational search.

Google’s Knowledge Graph indeed represents the first step of a toddler. To fully develop a “Star Trek Enterprise computer” Google must drive nextgen social for enterprises by fostering the placement of common point social networking between the bookends of navigational search and whole chain traceability. There is no other technology company better positioned or more highly motivated to do so. And we believe that it will. In backing the Wikidata Project, Google is already on a pathway to promoting common point social.

_______________________________

Authors:

Steve Holcombe
Pardalis Inc.


Clive Boulton
Independent Product Designer for the Enterprise Cloud
LinkedIn Profile


_______________

Endnotes
1. In these funding submissions co-author Holcombe introduced and defined the phrase "whole chain traceability" in reference to his company's patents.

Sunday, Dec 9, 2012

Membership policy announcement for the DOITCloud networking group

I initiated the Data Ownership in the Cloud™ (DOITCloud™) LinkedIn networking group in April 2009. Since that time it has grown to about 1,000 members. There have been some fantastic discussions, especially early on in 2009 and 2010. The membership has continued to grow since then but the long-threaded, multi-party discussions (see, e.g., Top Twelve Discussions: DOITCloud) have essentially ceased. Other comparatively similar forums have popped up in LinkedIn or elsewhere to provide a place for people to voice similar opinions and concerns. That's a good thing.

I come from a legal background with marketplace experiences regarding the sharing of "enterprise data" in fragmented supply chains. The Data Ownership in the Cloud group was begun by me generally to (a) learn more about the "personal data" space, and (b) find "birds of a feather". Both "data ownership" and "the Cloud" are amorphous terms by themselves. Even more so when stitched together. But I suspect that each DOITCloud member has at least a visceral feeling that the internet should be providing more choices to people regarding their "personal data". Or something like that. In any case, many DOITCloud members have become directly connected to me on LinkedIn. Thanks! I am so glad our paths crossed.

Again, the long-threaded discussions have ended, though almost every day I post a discussion (almost always a link to a blog post or article). I'm not posting those discussions with much of an expectation that a multi-party discussion will be sparked. I am using the group now mostly as a resource for cataloging relevant content. And that serves an important purpose for me that I am pleased to continue sharing with you.

However, I will soon be instituting a policy of requiring a direct LinkedIn (LI) connection to me for membership in the DOITCloud group. If you are already directly connected to me then there's nothing else to do. If you are not yet directly connected to me on LI, and desire to remain a member of DOITCloud, then please send me an LI invitation to directly connect.

The content posted to the DOITCloud group will remain the same. I would characterize the content posted here at DOITCloud as mostly applicable to "personal data". I am also posting content relevant to "enterprise data" at another LI networking group I formed earlier this year called the @WholeChainCom™ networking group.

There is much rhetorical cross-over these days between "personal data" and "enterprise data" but enterprise data is - and in my opinion will long remain - different from personal data. Enterprise data is a proprietary asset that must be selectively shared to be efficiently shared. Greater trust and provenance in supply chains requires fixing (i.e., immutabilizing) data elements at single locations with meta-data authorizations. (Want to know more?). So for the foreseeable future it makes sense that DOITCloud™ (addressing the sharing of personal data) and @WholeChainCom™ (addressing the sharing of enterprise data) remain separate "sister" groups.

I'll begin instituting the new membership policy on the 15th of December, 2012. If you do decide not to continue your DOITCloud membership, I want to say this:

"Thank you for your time spent in the DOITCloud group. It's been fun. It's been informative. It's been relevant. I hope that we connect again later on down the road. Safe travels."

If you have any questions, comments or anything else on your mind that you think I should read, please post them here. Thanks, again.