What is a Nanohouse?

The Nanohouse becomes the Dynamic Data Warehouse of the future.

This article originally appeared on the BeyeNETWORK.

What IS a Nanohouse? 

A Nanohouse is a combination of form, function, control and, possibly, basic awareness.  It is neither implied nor suggested that the awareness component is a function of knowledge; rather, it is a function of heuristics that recognize patterns in information and its relevance to other information that may be encountered. 

The “form” portion comprises two layers.  The first is the physical layer, known as the Nanotechnology layer or the atomic layer; this is the physical construct in which the information will be stored.  The second is the logical layer, which might be equated to today’s “data model.”  Although a crude approximation, “data model” will suffice as terminology.  There is a modeling technique known as the Data Vault which holds promise as the logical “data representation” within the Nanohouse.  With the power of its data attribution, it may be capable of handling multiple physical forms (nanites, for lack of a better term) which house similarly keyed information. 
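
For readers unfamiliar with it, the Data Vault models information as hubs (business keys), links (relationships between keys) and satellites (descriptive attributes over time).  A minimal sketch, in illustrative Python rather than any prescribed physical form, might look like the following; the field names here are assumptions of mine:

```python
from dataclasses import dataclass, field
from datetime import datetime

# Illustrative Data Vault shapes: hubs hold business keys, links hold
# relationships between hubs, satellites hold descriptive attributes
# keyed to a parent over time.  Field names are illustrative only.

@dataclass(frozen=True)
class Hub:
    business_key: str            # the key the information is "keyed by"
    load_date: datetime
    record_source: str

@dataclass(frozen=True)
class Link:
    hub_keys: tuple              # business keys of the hubs being related
    load_date: datetime
    record_source: str

@dataclass
class Satellite:
    parent_key: str              # the hub (or link) this context describes
    load_date: datetime
    attributes: dict = field(default_factory=dict)  # descriptive payload
```

Because every hub, link and satellite has the same shape regardless of content, the same pattern could, in principle, be stamped onto any number of nanites that house similarly keyed information.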

A good logical model and physical information store are paramount to the success of the layers yet to be defined.  Together, they could make or break the Nanohousing concept.  The modeling techniques must have the following characteristics: 

  • Be of repeatable and consistent design;
  • Represent encapsulated information if desired;
  • Be capable of compressing information to the smallest possible mathematical store;
  • Attain a scale-free quality (in other words, the architecture isn’t limited by volume or type of information); and
  • Be extremely dynamic and flexible, able to change without disturbing the data content. 

What do I mean by repeatable and consistent design?  I am referring to the ability of the data model and physical model to follow a repetitive process, regardless of the content they house.  I am also referring to the ability to “automate” the design portions of the model so that the nanites can be programmed to replicate without thinking; a sketch of this idea follows below.  Of course, parallelism goes along with this, as do the dependencies.  When I examine both third normal form and star schema, there appear to be too many dependencies held within the architecture. These dependencies will keep those designs from being useful at the Nanotech level. 
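
To make “replicate without thinking” concrete, here is a deliberately toy sketch: given nothing but a business-key name, the same structures are emitted every time, so the design step itself is mechanical.  The function name and output shape are hypothetical:

```python
from datetime import datetime, timezone

def stamp_model(business_key: str, source: str) -> dict:
    """Emit the identical hub + satellite pattern for any business key.

    Because the output shape never varies with content, the design step
    is mechanical and could, in principle, be replicated "without
    thinking" by a programmed nanite.
    """
    now = datetime.now(timezone.utc)
    return {
        "hub": {"business_key": business_key,
                "load_date": now, "record_source": source},
        "satellite": {"parent_key": business_key,
                      "load_date": now, "attributes": {}},
    }

# The same call shape works for customers, sensors, or anything else:
customer = stamp_model("customer-42", source="crm")
sensor = stamp_model("sensor-7", source="telemetry")
```

Contrast this with third normal form or a star schema, where foreign-key dependencies mean each new subject area demands fresh design decisions.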

What about the “Function”? 

The “function” component of the Nanohouse must contain neural-net and heuristic logic that is capable of being executed quickly and is made up of extremely small code fragments.  These code fragments exist within their own Nanotechnology base, but are NOT re-used.  They must be tightly bound to the data representations they “own.”  In this manner, the code itself can replicate and new data or information can populate the stores, but the functions must be highly tuned to have the following characteristics (a sketch of one such function unit follows the list): 

  • Always run in parallel;
  • Maintain a constant inflow of information;
  • Filter white-noise from important historical facts, for itself only;
  • Allow dynamic re-configuration of the information stores within the vicinity;
  • Keep a constant set of heuristic results readily available to score the influx of information with confidence levels; and
  • Be keyed by the information content it represents. 
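
Purely as a thought experiment, a single keyed function unit might look like the sketch below: it keeps its own running statistics, scores each incoming value with a confidence level, and filters what it deems white-noise for itself only.  The running-mean scoring and the threshold are placeholders of my own, not anything the Nanohouse prescribes:

```python
import math

class NaniteFunction:
    """Illustrative keyed function unit: scores incoming values against
    its own running statistics and filters what it deems white-noise."""

    def __init__(self, key: str, noise_threshold: float = 0.1):
        self.key = key                          # keyed by the content it represents
        self.noise_threshold = noise_threshold  # hypothetical cut-off
        self.count = 0
        self.mean = 0.0
        self.m2 = 0.0                           # running sum of squared deviations

    def confidence(self, value: float) -> float:
        """Score a new value: near 1.0 if it fits the learned pattern,
        near 0.0 if it is an outlier (Welford's online statistics)."""
        if self.count < 2:
            return 0.5                          # not enough history yet
        std = math.sqrt(self.m2 / (self.count - 1)) or 1e-9
        z = abs(value - self.mean) / std
        return math.exp(-z)                     # simple placeholder scoring

    def ingest(self, value: float) -> bool:
        """Accept a value into the store unless it scores as white-noise."""
        score = self.confidence(value)
        # Update running statistics either way, so the heuristic keeps learning.
        self.count += 1
        delta = value - self.mean
        self.mean += delta / self.count
        self.m2 += delta * (value - self.mean)
        return score >= self.noise_threshold
```

Because each unit owns its own statistics, filtering white-noise “for itself only” falls out naturally, and thousands of such units could run in parallel over a constant inflow of information.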

Special types of functions, including history, currency, linkage and set (set operations), will be built into specially made nanites.  The basic functions, such as read, write, security and data check, will be built at lower levels of the function chain.  Other types of functions, including replicate, strip, destroy and instantiate, must also be built.  Again, I’m borrowing here from the biological model of DNA strands. 
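
One hypothetical way to picture that layered function chain in code: a common lower level (read, write, data check) with the special-purpose and DNA-style operations built on top.  The class and method names simply mirror the terms above; the bodies are invented for illustration:

```python
from abc import ABC, abstractmethod

class BaseFunction(ABC):
    """Lower level of the function chain: read, write, data check."""

    @abstractmethod
    def read(self, key: str): ...

    @abstractmethod
    def write(self, key: str, value) -> None: ...

    def check(self, value) -> bool:
        return value is not None        # placeholder data check

class HistoryFunction(BaseFunction):
    """Special-purpose nanite: keeps every value ever written per key."""

    def __init__(self):
        self.history: dict[str, list] = {}

    def read(self, key: str):
        return self.history.get(key, [])[-1:]   # most recent value, as a list

    def write(self, key: str, value) -> None:
        if self.check(value):
            self.history.setdefault(key, []).append(value)

    def replicate(self) -> "HistoryFunction":
        """DNA-style operation: spawn a copy carrying the same design."""
        clone = HistoryFunction()
        clone.history = {k: list(v) for k, v in self.history.items()}
        return clone
```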

There will be rules (more functions) that help to govern which nanites can connect to which others, and which ones are “free” to connect to any other object at any time.  These rule-sets will help define both free-form dynamic relationship creation and deletion, and pre-programmed relationship creation and deletion.  These rules may even allow nanites to attach to DNA and other types of atomic-level structures. 
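
Such a rule-set could be imagined as nothing more than a list of predicates: a pair of nanites may connect only if every rule allows it, while a “free” nanite short-circuits the check.  The specific rules below (shared key domain, no self-links) are hypothetical examples of mine:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Nanite:
    key: str
    kind: str                     # e.g. "history", "linkage", "set"
    free: bool = False            # "free" nanites may connect to anything

# A connection rule is just a predicate over a pair of nanites.
Rule = Callable[[Nanite, Nanite], bool]

def no_self_link(a: Nanite, b: Nanite) -> bool:
    return a.key != b.key

def same_domain(a: Nanite, b: Nanite) -> bool:
    """Hypothetical pre-programmed rule: keys must share a domain prefix."""
    return a.key.split(":")[0] == b.key.split(":")[0]

def may_connect(a: Nanite, b: Nanite, rules: list[Rule]) -> bool:
    """Free nanites bypass the rule-set; everything else must satisfy it."""
    if a.free or b.free:
        return no_self_link(a, b)
    return all(rule(a, b) for rule in rules)

rules: list[Rule] = [no_self_link, same_domain]
print(may_connect(Nanite("sensor:1", "history"),
                  Nanite("sensor:2", "linkage"), rules))   # True
```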

Where does the Nanohouse fit in this model?

The Nanohouse is composed of form-meets-function nanites.  At some point, they may be able to self-assemble with other acceptable nanites (acceptable by differing criteria).  The resulting information store will be accessed through various means, ranging from machines, to interactive displays, to wave generators and other devices we haven’t even considered.  In the first article, The Nanotechnology Revolution, I mentioned that Nanotechnology will make today’s computers seem like Neanderthal toys.  This is true of the Nanohouse: it will make disks, CPUs, and RAM disappear.  There may be some macro-computing devices to help us interface with the Nanohouse, but most of this technology will be utilized as a “dumb” type of terminal. 

The Nanohouse becomes the Dynamic Data Warehouse of the future (take an Active Data Warehouse and add dynamic changes to both structure and content). 

Am I saying that Nanohouses LEARN? 

Yes, in the long run – that is what I’m saying.  They have the rudimentary capacity to learn basic facts about their interactions within their environment, with other data sets and with other Nanohouses.  They may just begin to identify what “white-noise” might be for their own data stores after being guided for a period of time. 
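
A toy picture of what “being guided for a period of time” might mean: a supervised phase in which inputs are labeled as noise or signal, after which the store leans on the frequencies it has accumulated.  This two-phase filter is entirely my own construction, offered only to make the idea concrete:

```python
from collections import Counter

class GuidedNoiseFilter:
    """Toy two-phase filter: labeled examples first, frequencies after."""

    def __init__(self, guided_examples: int = 100):
        self.guided_examples = guided_examples
        self.seen = 0
        self.noise_votes: Counter = Counter()
        self.signal_votes: Counter = Counter()

    def guide(self, pattern: str, is_noise: bool) -> None:
        """Supervised phase: a human (or outer system) labels the pattern."""
        self.seen += 1
        (self.noise_votes if is_noise else self.signal_votes)[pattern] += 1

    def is_noise(self, pattern: str) -> bool:
        """Autonomous phase: lean on whatever the guidance accumulated."""
        if self.seen < self.guided_examples:
            return False            # still learning: keep everything
        return self.noise_votes[pattern] > self.signal_votes[pattern]
```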

Our information stores may just begin to exhibit interesting behavior based on input.  I am not suggesting that these Nanohouses will become smart, nor am I stating that they will become “self-aware.” All I am saying is that, with the heuristics and hundreds of thousands of parallel inputs, all of which are active, they may begin to pick up patterns.  As a collective set of nanites within the Nanohouse, they may begin to show a reasonable “reaction” to inputs as a result of generating and maintaining heuristics. 

Of course, if the heuristics are wrong, then the reactions will also be wrong.  If the information that is stored is bad, then again, the results will be invalid.  Exploration of the Nanohouse will mean new ways of thinking, which will include attempting to define just what “white-noise” is (especially for each individual set of keyed data/functions). 

Summary

Now that the Nanohouse, or Nanohousing, has been defined, it will be interesting to see what evolves.  As Nanotechnology moves forward, the concepts of applying it to data warehousing, information storage and retrieval, and other facets of so-called “smart” machines will also move forward.  They will only be as smart as the information they gather and the algorithms that guide them to use or not use that information. 

Nanohousing is coming, whether we like it or not – whether it makes us smile or grimace.  There are ethical issues surrounding its construction as well as its utilization, but it will come.  It is much better to be prepared than to remain ignorant, waiting for something to happen.

I would highly recommend that you read my other articles on Nanotechnology that have not been referenced above. The first article, "Nanowarehousing: Nanotechnology and Data Warehousing," is an in-depth discussion of the form-versus-function debate at the macro level of computing.  The second article, "The Nanotechnology Crossroads," explores the current timeline for the evolution of Nanotechnology. 

Whether you're curious, a researcher, or simply wish to get in touch with me, I'd love to hear your thoughts and comments and to get your feedback. This is a research interest of mine.

  • Dan Linstedt 

    Cofounder of Genesee Academy, RapidACE, and BetterDataModel.com, Daniel Linstedt is an internationally known expert in data warehousing, business intelligence, analytics, very large data warehousing (VLDW), OLTP, and performance and tuning. He has been the lead technical architect on enterprise-wide data warehouse projects and refinements for many Fortune 500 companies. Linstedt is an instructor at The Data Warehousing Institute and a featured speaker at industry events. He is a Certified DW2.0 Architect. He has worked with companies including IBM, Informatica, Ipedo, X-Aware, Netezza, Microsoft, Oracle, Silver Creek Systems, and Teradata. He is trained in SEI/CMMI Level 5, and is the inventor of the Matrix Methodology and the Data Vault data modeling architecture. He has built expert training courses, trained hundreds of industry professionals, and is the voice of Bill Inmon's blog at http://www.b-eye-network.com/blogs/linstedt/.

 
