
Browsers as UI to Web Cloud Applications.


Postby BearState » Mon Jun 30, 2008 8:54 am

NOTE: This thread has grown from this first simple post, expanding to describe a new type of web technology and its infrastructural, technical and political characteristics. It's worth skimming through if you don't have time to give it a thorough read. It's not a formal description, but a brain dump of the growing application space on the web and what its possible future might encompass.


Some tout the power of AJAX as the key to bringing full-blown applications to the web. I don't think that's quite enough.

We know that there's a virtual machine behind browser technology to interface with the OS and the web, but wouldn't it be nice if a VMOS, or Virtual Machine Operating System, could be built in so that true apps could run through browsers? They could run in two paradigms: one that feeds off the client system for resources, and another that finds its resources via the net, with both being possible simultaneously.
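To make the two paradigms concrete, here's a minimal TypeScript sketch. Everything in it is hypothetical (the paths, URLs and helper functions are placeholders, not any real browser API); it only illustrates one loader that can satisfy a request either from the local machine or from the network.

// Hypothetical sketch only: none of these helpers exist in any real browser.
// The idea is a single loader that can satisfy a request either from the
// client system (local paradigm) or from the net (network paradigm).

type ResourceSource = "client" | "network";

interface ResourceRequest {
  name: string;                 // e.g. "spellcheck-dictionary"
  preferred: ResourceSource;    // which paradigm to try first
}

async function loadResource(req: ResourceRequest): Promise<ArrayBuffer> {
  const local = () => readLocalFile(`/koala/cache/${req.name}`);           // client paradigm
  const remote = () => fetchRemote(`https://example.org/res/${req.name}`); // network paradigm
  const [first, second] =
    req.preferred === "client" ? [local, remote] : [remote, local];
  try {
    return await first();
  } catch {
    return await second();      // both paradigms remain simultaneously possible
  }
}

// Stand-ins for what a VM OS layer in the browser might provide.
async function readLocalFile(path: string): Promise<ArrayBuffer> {
  throw new Error(`no local copy of ${path}`);   // placeholder behaviour
}

async function fetchRemote(url: string): Promise<ArrayBuffer> {
  const res = await fetch(url);
  if (!res.ok) throw new Error(`fetch failed: ${res.status}`);
  return res.arrayBuffer();
}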

What kind of RFC parade would that inspire?

BearState

Postby kapilmunjal » Wed Jul 02, 2008 10:38 am

VMOS looks very interesting...well I never thought of something like this but this idea can work very well...

Postby BearState » Sun Jul 13, 2008 4:50 pm

kapilmunjal wrote:VMOS looks very interesting...well I never thought of something like this but this idea can work very well...


There can be lots of different usage scenarios.

1) By having a VM OS active in a browser, application vendors can keep application registration and activation keys server side, while the user's files and the bulk of the app's code are client side.

2) UI for applications, on both intranets and the internet, benefits from all the features a browser can provide, and then goes further.

3) Client-side access to OS services, especially disk services, means that all sorts of things become possible. The application can seamlessly integrate web-based services such as dictionaries, image resources, document archives, email and video telephony, going miles beyond what applications are capable of today. Access to disk services means the user can save things client side or in web storage pools. It can also allow greater use of virtual memory in ways not thought of heretofore.

4) Social interaction with applications. Let's just look at gaming, for example. A typical game may have all of its state maintained on the server, and players update a server database based upon the moves they make. But with a VM OS feature in the browser, the game state can be individualized, residing partially or wholly on the client. The game becomes such that plays are made via messaging, and each player is not privy to the state of the other player's game space ( a small sketch of this follows the list ). It's like you are on Star Trek's Enterprise: if the communication video channel is not up, the Klingons can't know what is happening except for what they can see, and vice versa. Can you imagine how this would fit project workgroups?

5) Mashups for Applications! Whoa.

6) Portal Services ... heck let your imagination go wild!


and more.
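Here's the kind of thing item 4 is getting at, as a rough TypeScript sketch. The types and class are invented for illustration; the only point is that the full game state stays private on each client and only the moves travel as messages.

// Illustrative only: each player keeps the full game state locally and
// broadcasts nothing but the move itself, so an opponent never sees the
// whole board, just what the messages reveal.

interface Move {
  player: string;
  piece: string;
  to: string;
}

class LocalGameState {
  private board = new Map<string, string>();   // square -> piece, private to this client

  // Apply our own move locally and return the message to send over the wire.
  applyLocalMove(move: Move): Move {
    this.board.set(move.to, move.piece);
    return move;                                // only the move leaves the client
  }

  // An opponent's message reveals their move, not their full game space.
  applyRemoteMove(move: Move): void {
    this.board.set(move.to, move.piece);
  }
}

// Usage: send whatever applyLocalMove returns over a WebSocket or a
// hypothetical Koala messaging channel; the board itself never travels.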

Postby BearState » Tue Jul 15, 2008 6:33 pm

Here's an interesting application for a VM OS based browser ...

Server/Client Role Reversal: While a user has a browser application open, the server providing the application service and API takes on the role of client, making requests from the server side to the client side, which the client then responds to, a role reversal on its end as well. The most immediate value of this is obtaining status or local state information from the client. Remember that gaming scenario outlined in the previous post, where messaging updated game states held individually on clients? Eh?

Reversal of client/server roles might simply arbitrate another client's need to make requests of some other known client, or allow the server itself to make requests for its own purposes.

The next big generation of web/browser technologies would not only allow this closer knit between client and server, but also allow over-the-web control systems. In such a system, both the client and the server take active roles in manipulating application values, which might well be automated controls over physical systems.
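There's no standard way today for a server to initiate requests to a browser, so here's a hedged sketch of the role reversal using a plain WebSocket; the URL, message shape and state object are all placeholders, not a real protocol.

// Sketch under assumptions: the server acts as the "client" by sending
// queries down a WebSocket, and the browser answers with its local state.

interface ServerQuery {
  id: number;
  what: "status" | "gameState";
}

const localState = {
  status: "idle",
  gameState: { turn: 3 },
};

const socket = new WebSocket("wss://example.org/koala-reverse-channel");

socket.onmessage = (event: MessageEvent) => {
  const query: ServerQuery = JSON.parse(event.data);
  // Role reversal: the server asked, the browser responds.
  const answer = query.what === "status" ? localState.status : localState.gameState;
  socket.send(JSON.stringify({ id: query.id, answer }));
};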

Koala!

Postby BearState » Wed Jul 16, 2008 3:11 pm

BearState wrote:Koala!


Cool!

There ya go, I came up with a name for this type of browser. If anyone's interested in following my thoughts on this, we can call it a Koala Browser. The Aussies surely won't mind and well, neither will the Ozzies.

I just wanted to patch in some thoughts on resources: where they might originate, what they might be, and how they are specified and accessed. That's a big topic for this new browser and one of the most important. I foresee some hard changes in network security to accommodate all these resources, but that's another post. For now, let's just note that security of resources needs some work.

Location or origin of resources is simple enough to brainstorm.

Server side, resources don't really change much, unless the server adopts a greater role in pooling. And that may be a requirement, since clients can go offline and take their resources with them. The structure of server containers will need some added functionality; that goes without saying.

Client side, however, the user's client is only a partial source. There is no reason why one client cannot access resources from another, if the clients are known to each other and allow/authorize sharing of certain resources.

So for resources, you have User Clients, Remote Clients and Servers as the high-level locations. Breaking things down hierarchically, things go ...

Server:
Application Stores
Pools and Services
Databases
Disk Space
Channels

Client ( User ):
Application Front Ends
DOM Nodes
Flat File and other Databases
Client Side Pools
Local System Resources, including peripherals.

Client ( Remote Group Participant ):
Client Side Pools
Guarded local resources and peripherals


Availability from both server and client would include authorization lists and the ability to dynamically deny or allow resources. But switching to alternate resources because a remote client goes offline or sees downtime must be accommodated. Critical application core components must therefore come from a server. Client contributions are mandated to be of a less critical nature: either resources the application can function without, or resources that have alternatives available.
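As a rough sketch, and assuming entirely made-up names, a resource declaration and acquisition routine along those lines might look something like this in TypeScript:

// Invented for illustration: a resource declares where it may come from,
// which peers are authorized to serve it, and whether it is critical
// (critical resources must come from a server, per the post above).

type Origin = "server" | "userClient" | "remoteClient";

interface ResourceSpec {
  name: string;
  critical: boolean;
  origins: Origin[];               // tried in order; later entries are fallbacks
  authorizedPeers?: string[];      // remote clients allowed to serve it
}

async function acquire(
  spec: ResourceSpec,
  fetchFrom: (origin: Origin) => Promise<Uint8Array | null>,
): Promise<Uint8Array | null> {
  for (const origin of spec.origins) {
    if (spec.critical && origin !== "server") continue;     // enforce the server rule
    const data = await fetchFrom(origin).catch(() => null); // a peer may be offline
    if (data) return data;                                   // first available origin wins
  }
  return null;   // a non-critical resource the application can run without
}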

Let's keep it brief. That's all for now.

Postby BearState » Wed Jul 16, 2008 5:24 pm

Here's an interesting link that alludes to the state of affairs when it comes to web applications today ....

http://www.itwire.com/index2.php?option ... 1&id=18725

The link describes how Google foresees the future of applications as migrating to the web. However, this vision of Google's is tied to existing technologies, and Java technologies in particular. We're talking steep learning curves and capacities that don't even approach what a Koala Browser would be capable of. Google's heart is in the right place, but the infrastructure isn't there for the average web app developer to really find their footing. We're on the edge, and the big industry players are trying to define how to make the leap.

The truth is that Google cannot do it alone. The Koala Browser, if you've deduced anything from what I've posted so far, must be a concerted industry effort, including the browser developers, the server developers and the big search engine players that want to be the app portals of the future. And of course, the major players in the application and operating system product lines have to have their input. But there's more to it. Security is a big issue, and at the moment the way that malware is eliminated on the web really needs some serious revamping. More on that in another post.

It's gonna get political. Of that, you can be certain. Real downright political. But the best solution is not going to be one that comes from a single player like Google. All Google can do now is make offers of services that use what currently exists to build its base. The big solution here is an open one.

Brian L. Donat

Postby BearState » Wed Jul 16, 2008 11:32 pm

CHUNNELLING:

Let's intro a new concept here ... chunnelling.

In Europe, people call the tunnel running from Britain to France the Chunnel ( Channel-Tunnel ). And I hope they don't mind my borrowing the term to apply it to web communications.

Chunnelling, if implemented as envisioned, will have a number of features designed to do several things, including, in order of importance:

1) web resource load reduction
2) improved transfer performance and response times
3) greater security and reduced stream exposure
4) multi-pathing and simultaneity of multiple requests and responses
5) stream fault tolerance

To some degree, the first three aspects of chunnelling are already in operation today, but not via the mechanism that true chunnelling will utilize. Google, for example, allows webmasters to specify target geographic regions for their page index availability. And therein lies the first big clue to what chunnelling is all about. But chunnelling for web applications goes beyond geographic audience limitations on page indexes. Chunnelling granulates web traffic to a much finer degree of localization, and will only be possible through a combination of hardware and software infrastructural innovations.

If we're going to rework the web for Koala, we should do it with an eye toward achieving the most robust implementation of features that can be done in one technological revolution, to make it really worth the expenditures that will have to be made. This will really stir the economy, that's for sure. Start considering the next high-tech stock boom?

Chunnelling does just what the Channel Tunnel, or 'chunnel', does. It provides linkage in the most direct way, but the difference is that chunnelling will operate mostly over an open architecture. I say mostly, because there is the possibility of having web super-streams, which are dedicated infrastructures between specific locations. So the main gist of the concept is to finely hone addressing and routing so that application resources can be direct in their web journeys.

It becomes obvious how load reduction comes to pass, as well as improved performance and response times. The security benefits are only partially clear, in terms of reduced exposure of the resource packets. But security is an issue that, as stated in an earlier post, needs revamping. In fact, we'll touch a little on the topic now by making some comparisons between now and the envisioned then.

At the current time, malware, spam and other threats are for the most part eliminated on the clients. As such, the greatest fault of web security is that it is implemented as an option. That is, the client may elect not to use security software, be it anti-spyware, anti-virus, anti-spam or some other solution. And those who do use those solutions are already turning red-faced, because with the huge number of signatures, the packages are the worst resource hogs on client systems. In effect, the useful envelope for this scenario is stressed to the point where the solutions are almost as bad as the problem they seek to solve. It is therefore envisioned that those checks be moved to the web stream.

This will undoubtedly create political havoc with publishers of security software solutions. After all, it threatens to dry up their commodity customer base. But not necessarily. Depending on how things are implemented, it might well relieve them of overhead, freeing up their packages to effectively police for new threats. In any case, the stream infrastructure is going to see changes in terms of routing and implementation of security. Hacking will be greatly reduced, effectively taking the 'children' and 'novices' out of the game. Hackers will have to be very serious technological sharpshooters to continue the practice, and then it will be easy to claim that they are involved in either syndicated commercial espionage or political ( government ) espionage. Driving spammers into extinction is a real possibility.

Multi-pathing and simultaneity of multiple requests and responses might present a more challenging design issue. But this is necessary for the implementation of many features, not the least of which are telephony, video conferencing and near real-time control systems. We can also consider the multi-pathing features of chunnelling a fault tolerance feature; a tiny sketch of that angle follows. The memory structures, software algorithms and physical infrastructure that would have to be put in place are a bit beyond this post, but the concept is not just feasible, it is economically desirable.
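Just to illustrate the fault-tolerance side in today's terms ( the hosts are placeholders, and nothing here is chunnelling itself ): issue the same request over two paths and keep whichever answers first.

// Minimal sketch: Promise.any resolves with the first path that succeeds
// and ignores the ones that fail, so one slow or broken path does not
// stall the application.

async function multiPathFetch(resource: string, paths: string[]): Promise<Response> {
  const attempts = paths.map((base) => fetch(`${base}/${resource}`));
  return Promise.any(attempts);
}

// Usage (illustrative hosts):
// const res = await multiPathFetch("frame.bin",
//   ["https://edge-a.example.org", "https://edge-b.example.org"]);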

So, Koala is something more than a slow moving marsupial that eats eucalyptus leaves.

Brian L. Donat

Postby BearState » Fri Jul 18, 2008 12:41 am

Rich Internet Applications (RIAs) form the umbrella under which the current ideology for web applications is evolving.

RIAs are said to be

1) Browser and Network Centric
2) Independent of Operating Systems and the Desktop

and a bunch of other things like

3) openly collaborative
4) service oriented
5) Mash-Ups
blah blah.

Let's debunk some stuff here.

1) OK, RIAs are browser and network centric, but under the current infrastructure they are also application restrictive.

2) Browsers are not independent of operating systems, and therefore RIAs are not. You may have a mobile tie to the internet, but somewhere, there's an OS behind it. As long as there is a need for disk space, and there always will be, there'll be an OS involved ... somewhere.

3) Cool, RIAs are in fact openly collaborative.

4) Applications are services, so this statement is just plain hype.

5) Now this is a new parade to watch. Yep, and it's sponsored by some big players. Coders and webmasters are enticed to intermingle and dance with web objects from diverse origins, hybridizing them into their own creations. Why? It's fun. Nah, that's not why. Maybe it ties the webmaster to a particular vendor? Jump on in and join the parade.

Not to undercut the RIA movement, what we really want to look at is ..

- What is possible with today's browser technology
- What can be possible with tomorrow's


Today's browser technology allows RIAs to be paraded as marvelous because of the introduction of certain old paradigms into the web browser arena.

1) AJAX
2) MVC ( Model View Controller ) architectures
3) Objects & Plugins
4) Open Source capacity to introduce new features into a browser
5) Certain new disk availability techniques such as Google's Gears
6) Free usage Shared Features and Libraries via Open Source, GPL, etc.
7) The ability to render web pages with events.
8) XML descriptive packaging of data and XSL translations.
9) Other Stuff ...

Now what you should notice here is that all of this stuff basically goes off in every compass direction, and some of these things could almost be described as band-aid add-ons to the server or to the browser. There are some nice integrated changes that have recently occurred in browser technology, like XML and XSL, but have no doubt: a lot of this stuff is all over the map and is not a common browser/server model. And that's a problem.

In current browser technology, for example, if you wanted to do something wild and crazy like set up semaphores or shared memory on the client side for an application, by golly, you can in fact do it. But the way you do it will likely be one of a large number of variants that other coders have bolted onto the browser for similar needs. It's ad hoc, to say the least.

Koala consolidates this wriggly explosion of diverse expansions of web technology into one client/server architecture, and does it in a way that means nobody has to invent clever tricks to get the browser to behave like a true application base. The methods are universally agreed upon, and the operating system is NOT left out of the picture. The network and the operating system are synergistically merged. The operating system takes care of the neat bells and whistles like it always has, and the browser is only required to talk to the OS to get what it needs. And that means there can be a standardized interface even for implementing such things as semaphores and shared memory -- FOR BROWSER APPLICATIONS!

Let's keep in mind that the browser IS itself an application. BINGO! Does that make sense? Yes. And as an application, it depends upon the operating system for a great deal of its resources. And no, by incorporating a VM OS feature into the browser, we will not free it from the OS. The VM OS is an emulator, so to speak, that translates communication with the OS from a common language into that of the OS. It's also a filter and a service controller. But there's no need for it to do a lot of what true operating systems do; it can get all it needs from a true OS. And there's no longer any need to go out looking for somebody else's bell or whistle, or to invent your own. The traditional bell and whistle from the OS already exists and becomes transparently available.
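To show what "universally agreed upon" might mean in code, here's a purely hypothetical TypeScript interface; nothing called KoalaVMOS exists in any browser, and the names are invented. The point is only that semaphores and shared memory would be reached through one standard surface instead of per-project hacks.

// Purely hypothetical interface for illustration; no browser ships this.

interface KoalaSemaphore {
  acquire(): Promise<void>;
  release(): void;
}

interface KoalaVMOS {
  openSemaphore(name: string, initial: number): Promise<KoalaSemaphore>;
  openSharedMemory(name: string, bytes: number): Promise<SharedArrayBuffer>;
}

// A browser application would then write ordinary code against the interface:
async function appendLogEntry(vm: KoalaVMOS, entry: string): Promise<void> {
  const lock = await vm.openSemaphore("logfile-lock", 1);
  await lock.acquire();
  try {
    // ... write the entry to a shared buffer or local file via the same layer ...
  } finally {
    lock.release();
  }
}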

Postby BearState » Fri Jul 18, 2008 2:57 am

Why, pray tell, can't you divorce the evolution of web applications from the operating system?

And the answer is ...

1) bandwidth
2) response time
3) application integrity
4) code depth
5) idiotic amplification of browser functionality
6) reliability
7) development & SQA

to name a few reasons.

There is no golden rule, nor will there be, that says a web application must only be of such and such size and must abide by such and such rules for complexity. That's true even with today's browser technologies.

The trick here is to create applications that are in a sense hybrids, deriving part of their functionality and resources through the web, the internet and the browser, and the bulk via the operating system: the kernel, libraries, DLLs and so forth.

Try to run a really big application entirely through the web and bandwidth is ridiculously abused.
Response times will certainly fall flat.
And with most of the application dependent upon arrival over the web, the application's hopes for clean initialization and error-free runtime are a dream. Integrity isn't possible.
Code depth, the amount of baggage loaded into the browser, would be enormous, and the browser's functionality would be stupidly amplified.
Knowing that web pages are difficult to debug, having most of an application running over the net would be a development and SQA nightmare.

So, the fantastically rich internet application would be dependent upon the operating system, which means you'd see a one-time download scenario when you subscribe to an application; then, with all the pieces in place, initializing each use would be a snap.

And that brings up another issue: multi-user apps on the same host. It doesn't take a whole lot of imagination to see that it would be a fool's game to download the OS-resident part of the application separately for each user on such a system. There will need to be an intelligent procedure to make sure that part of the app is installed in a shared area of the system, so that users can path to it if they want to use it. Does that suggest another administration nightmare? It should. But automating those setup processes is possible.
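Here's a hedged sketch of that check, in Node-flavoured TypeScript; the shared path and layout are assumptions for illustration, not any real installer convention.

// Before pulling down the OS-resident part of an application for a new user,
// see whether a previous user already installed it in a shared location.

import { existsSync } from "node:fs";
import * as path from "node:path";

const SHARED_ROOT = "/opt/koala-apps";          // illustrative shared location

function resolveSharedInstall(appName: string, version: string): string | null {
  const candidate = path.join(SHARED_ROOT, appName, version);
  return existsSync(candidate) ? candidate : null;   // reuse if present
}

function ensureInstalled(appName: string, version: string): string {
  const shared = resolveSharedInstall(appName, version);
  if (shared) return shared;                    // later users: no download at all
  // First user on this host: download once into the shared path so later
  // users can simply path to it (download step omitted in this sketch).
  return path.join(SHARED_ROOT, appName, version);
}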

Oh? Is there another nightmare here? Did somebody say something about versioning across different operating systems for the downloaded part of the application? Good question! And there are some solutions. I'll let that be a topic for contemplation.

Postby BearState » Fri Jul 18, 2008 4:32 am

Some of several ways to start up a Koala web app ...


Command Line Execution ...

$basename

C:\> basename.exe


Url Execution ...

http://www.urlname.com
http://www.appname.edu/basename.wap


Passing XML ...

<wapp>
  <name>appname</name>
  <description>A web application</description>
  <url>http://www.appname.edu/basename.wap</url>
  <args>
    <basename>appname</basename>
    <arg1>
      <flag>verbose</flag>
    </arg1>
    <arg2>
      <filename>resource.lst</filename>
    </arg2>
  </args>
</wapp>


Tag ...

<wapp basename="appname">appname arg1 arg2</wapp>


URL-launched web apps in a future Koala Browser might be very similar to, or even be, legacy apps as they exist right now in contemporary browsers.


But ok, the $5.35 question here is ...

If you start a web app from the command line, how does it load the HTML and other web page code? How does it start the browser?

Answer ... Don't be silly. You can write the command to do all the initialization, including starting up the browser and passing it the URL. And that happens as part of initializing any client-side libraries and code, almost simultaneously.

Why not?
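A minimal sketch of such a launcher, assuming a Node-style environment and a hypothetical browser binary called koala-browser ( both are placeholders, not real tooling ):

// Illustration only: "koala-browser" is a made-up binary name and the URL is
// the example from the post. The launcher does its client-side setup, then
// hands the URL to the browser, as the answer above describes.

import { spawn } from "node:child_process";

function launchWebApp(url: string): void {
  // 1) Initialize any client-side libraries and resources here (omitted).

  // 2) Start the browser and pass it the application URL.
  const browser = spawn("koala-browser", [url], {
    stdio: "ignore",
    detached: true,
  });
  browser.unref();   // let the launcher exit while the app keeps running
}

launchWebApp("http://www.appname.edu/basename.wap");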

Postby BearState » Fri Jul 18, 2008 5:38 am

OK, so let's put in a spot about languages, code and APIs.

At this point it's almost natural to wonder if there are some new programming languages on the horizon, or what changes will occur to existing languages. The <wapp></wapp> tags suggest changes to HTML. But how about PHP, JavaScript, ASP and Java?

First, it should be clear that an awful lot of stock code libraries will go by the wayside. Call it deprecation or obsolescence, whatever. And whether that brings a hoot of hooray or a hoot of complaint along the lines of 'Hey, I wrote that code and I'm proud of it' is a matter of letting time tell the tale. But all those tricks to get at functionality that contemporary browsers don't provide in and of themselves will get the axe. We're not going to see languages get the axe, no. And there may not be any new languages, though it's possible. For sure, we're going to see some additions and modifications to all the languages in current use with contemporary browsers. And there may be some other casualties on the deprecation scrap heap besides code libraries. Consider that once upon a time there were applets, and while there may still be applets, they might as well be extinct in browser land. And so, in Koala land, the new environment is likely to cause some additional extinctions.

It was alluded to in a previous post that there were ways to accommodate the issue of OS versions and code that must run directly on the OS. Well, one of the solutions is not to have the code run directly on the OS. When we talk about a browser with a VM OS built into it, the VM OS layer does not have to be a physical part of the browser. This layer can be separate from the browser, but available to it. With the layer separate, it becomes available to run code outside of the browser, and offline, network independent. Fancy that! But let's make a caveat: there's a performance hit in this scheme. That hit might well be motivation enough for some application publishers to write a lot of their code in the native operating system environment using languages like C/C++. There is, in fact, no reason why they shouldn't be allowed to do this. But the VM OS layer will allow smaller apps to have their code written completely in fully portable form, whatever that language might be, scripts included.

Ultimately, the language changes will revolve around APIs to access the VM OS layer and, through it, the native OS. There will also be APIs for new internet features and connectivity. Let's not forget that both the browser and the server will be capable of role reversals and other nifty functionality. The HTTP request/response paradigm will get some redefinition, and accordingly, additional HTML structures and tags will come on the scene.

Debugging of code will become more important, and changes can be expected in that arena, including try/catch handlers and other features which some languages do not implement well. Event handlers will take on raised importance, and languages will need to evolve to allow trapping events.
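For what it's worth, the trapping side can already be sketched with handlers browsers ship today; the "koala-state-change" event name below is invented purely for illustration.

// A small sketch of error and event trapping using only standard browser
// APIs: a global error trap, a custom-event listener, and a try/catch
// wrapper around a call that may fail.

window.addEventListener("error", (e: ErrorEvent) => {
  console.error("trapped uncaught error:", e.message);
});

window.addEventListener("koala-state-change", (e: Event) => {
  console.log("application event trapped:", (e as CustomEvent).detail);
});

async function guardedCall<T>(op: () => Promise<T>): Promise<T | null> {
  try {
    return await op();                       // the call that might throw
  } catch (err) {
    console.warn("handled in catch instead of crashing the page:", err);
    return null;
  }
}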

Postby BearState » Fri Jul 18, 2008 2:12 pm

Client side PHP? Client side JSP? Why that's the most unheard of thing I ever heard of!

Role reversals in the server/client model will cause a migration of traditional server-side languages to the client. That means the web application developer will, more than ever, need to understand and be competent with both sides of the coin, front end and back end.

It is not unheard of today to have front-end specialists and, alternatively, back-end specialists. The workgroup for web sites today is broken down into:

System Administrators
Database Administrators
Database Architects
Back-End Developers
Front-End Developers
Graphic Designers
Web Masters

In some cases, several of these functions are done by the same person or persons. And that's not to say that what today's back-end developers do will change. Not all web sites are web applications. But the distinction between front end and back end will see some blurring.

How a container object that emulates what Tomcat, Apache, IIS or others like WebSphere do finds its way to the client will be an interesting migration to watch. And databases, client side? Yep, it's going to be possible. These databases won't be huge terabyte creatures, but they will store shared resources which a remote client or a server might be allowed to access. They won't require hands-on database administration or even backup, unless it's done remotely from the server as a service. These mini databases may, in many cases, have a disposable nature and can be recreated with default values if required.
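A disposable client-side mini database of that sort can be sketched with nothing more exotic than localStorage; the key name and record shape here are made up for illustration.

// If the store is missing or damaged, it is simply thrown away and recreated
// from default values, as described above.

interface MiniDB {
  version: number;
  records: Record<string, unknown>;
}

const DB_KEY = "koala-mini-db";                        // illustrative storage key
const DEFAULTS: MiniDB = { version: 1, records: {} };

function openMiniDB(): MiniDB {
  try {
    const raw = localStorage.getItem(DB_KEY);
    if (raw) return JSON.parse(raw) as MiniDB;
  } catch {
    // fall through: corrupt data is treated as disposable
  }
  const fresh: MiniDB = { ...DEFAULTS, records: {} };  // recreate with defaults
  localStorage.setItem(DB_KEY, JSON.stringify(fresh));
  return fresh;
}

function saveMiniDB(db: MiniDB): void {
  localStorage.setItem(DB_KEY, JSON.stringify(db));
}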

Postby BearState » Fri Jul 18, 2008 10:11 pm

Revolution, change, chaos and the politics of watering an economic bloom, now that's a topic. There are people who loathe change, who tend to their castles and guard against the briars climbing up the walls. And then there are people who love an earthquake. Dynamism is a philosophy of change, a motile approach to development which favors adaptation and crossing lines, using both old and new approaches. And while a dynamist would loathe flinging stones, they are inevitably wrapped in a relationship where the castle walls are meant to keep them from being so dynamic. Today's web cultures are a perfect storybook example of this relationship. And without doubt, it applies to the bare bones of bringing Koala to life. And without doubt, change is inevitable. There will be winners and there will be losers.

There is a saying regarding diplomacy and avoiding open warfare that goes like this: "Keep the explosions small." In other words, when change creates a conflict, attempting to dam off the dynamics driving that change will only build up the potential energy it carries. The longer you hold it back, the greater the energy when it finally gets released. Overlapping new technologies and phasing them in allows businesses fine adjustments amid controlled arguments about direction and strategy. Koala is much too big an endeavor for any one entity to achieve on its own, and it needs a controlled release and phase-in. Collaboration is the rule. The castle gates must be open. The lines must be crossed.

There will be an enormous number of new patents, and that means control for whoever holds them. Those patents are conceivably up for grabs right now, for anyone reading this thread. This technology 'land grab' is one source of conflict that creates a bottleneck to the dynamic underpinnings of growing a new technology. Collaboration and keeping the explosions small require that much of the new technology be in the commons. How that gets moderated, no one can guess. It'll take strong, immediate leadership. Such 'land grabs' on technology are historical with regard to the information technology industry and, in particular, browser technologies. If there's any sure way to bottle up the dynamic forces that would steer the web toward something like Koala, this would be one of them.

The other is fear of losing the castle, the lands already possessed. Previous posts in this thread have suggested that malware elimination needs to move from the client to the stream. It has been suggested that a lot of code libraries would be deprecated or obsoleted. The companies and the programmers who have put so much into all of this will face changes. And, even more still, end users who have already been shocked by version changes in existing technologies are not keen on change. Often changes are viewed as attempts to pull off a smoke-screen sort of planned obsolescence. It doesn't have to break; it just has to go out of production and face declining support. And the worst part is that moving to the new sometimes breaks the machine. Fear of losing the castle is a real issue.

So Koala has to be far more than cool. It has to be sympathetic to those who fear change, and in fact to those who might be incapable of adapting to change. The new technology must be phased in as an overlap, where the legacy methods and infrastructure are kept intact while the ( cool ) Koala innovations get introduced. The project needs to be managed well, with this simpatico concept as one of its foremost ideals.

The entrepreneurial opportunities connected to an open project like Koala are great, far-reaching and numerous. There's a lot of work to be done. It can be a spark for an economic boom. That's the great thing about change: change puts people to work adapting to it.

Postby BearState » Sat Jul 19, 2008 1:27 am

Let's take a look at virtual machine technology and what it's used for today. Today's VMs are likely to be the seed sources for the VM OS layer for Koala.


The most well known VM is the JVM ( Java Virtual Machine ), Sun Microsystems' VM that allows portable code to be written for web interactivity and applications on both the server and the client side. This VM is web and browser centric. Apache Tomcat is built around the JVM, and IBM's WebSphere incorporates IBM's own VM.

The other virtual machines out there emulate operating systems for various purposes, the largest use being management of large server farms, which may incorporate servers running multiple different operating systems. Hypervisors, as they are called, allow system administrators to monitor and manage the complexity of these large arrays of servers through a single common interface. There are other purposes, such as allowing applications to run across multiple variant operating systems. The largest VM provider of this type is VMware. IBM has a product known as the z/VM hypervisor, and Microsoft has been in the VM and OS emulation arena with Virtual PC for Mac and Virtual Server 2005; Microsoft is fledging a hypervisor product at this time. SWsoft offers its OS virtualization product, Virtuozzo, which allows multiple emulated operating systems to run on one platform.

Finally, the open source world offers two products that give the general engineering populace the freedom to work with this technology directly. The simplest is Bochs, which allows lightweight emulation of operating system features; Bochs is used for OS debugging, peripheral emulation and, surprisingly, by the emulation hobbyist. The other player is XenSource, which provides the basis for hypervisor technology for Linux publishers such as SUSE, Red Hat and Novell. Sun Microsystems also utilizes Xen.

My apologies if I have missed mentioning any company that might also be a player in these technologies. There's no reason to believe there aren't start-ups out there working toward servicing the needs of hypervisor and VM users.

The type of technology that the VM OS layer will require is clearly already available, complete down to a population of engineers who understand how to implement the functionality Koala will require. And that's good: project management doesn't need to task itself with producing a knowledge base, only specs.

Postby BearState » Sat Jul 19, 2008 4:03 am

Chunnelling deserves some greater exposé, so let's take a peek.


When you consider DiffServ ( Differentiated Services ) in router parlance, you might expect that chunnelling is already implemented in modern network routing under another name. The control plane in a router already utilizes this feature to provide QoS ( Quality of Service ) guarantees to certain kinds of network traffic, voice and video for example.

Web applications through Koala will introduce entirely new content in network packets, and QoS will be critical in some cases and less critical in others. The paths of application exchanges will also vary: client to server, server to client, server to server and even client to client. Identifying packets that need various treatments is built into the ISO-defined layers and may include IP destination information. It's stated that this system is effective enough to allow real-time forwarding of packets. That's encouraging, because for some web applications, control systems in particular, it's going to be necessary to handle real-time traffic effectively over the net.

So what the heck is Chunnelling all about?

In the Koala scenario, you might expect some latency and queuing changes to be incurred. In particular, if malware elimination is moved to the stream, the game would seem to change considerably. But malware detection on the stream does NOT require passing the data through a filter in-line with the stream, so no, there should be no extra latency incurred for the detection phase of malware elimination. That statement also reveals why current anti-virus and anti-spam services delivered directly to clients aren't going to go away entirely.

A good programmer believes, and knows, that software can be used to improve performance. A manager believes it takes bigger and faster hardware. Bigger and faster hardware means more dollars, so algorithmic changes should be preferred to hardware changes. One algorithmic technique, tautomerism or repeated measures applied to repetitive exchanges between two IP addresses, is a source of information usable by routers. Tautomeric algorithms can be used to collect and utilize this information on modern routers for web-application-specific packets. Strategies for how long to retain the information and what information to collect and use are all part of an approach to defining direct routes. Such algorithms may in fact already be part of what the control plane does, but not particularly for user-specific IP-to-IP exchanges.

Routers by craft remain indifferent to the relationships between end points; they involve themselves with paths. Application traffic, however, is explicitly repetitious, and end points are the crux of what matters to it. Envision, for example, dynamic additions to a packet header which record the route, so that the response generated from the destination, or further traffic from the source, has knowledge of the route and packages it up with future packets for the router control plane to utilize. The application need not be indifferent to its end points, nor to what comes in between. The browser and server effectively participate with the network infrastructure in route definition. It beats implementing a static route with hardware hands down. Such algorithmic considerations might also play an ancillary, participatory role in eliminating malware from the stream.
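A very rough sketch of the "repeated measures" bookkeeping, at the application level only; the endpoint-pair key and the idea of attaching the observed hops as a hint are inventions for illustration, since real routers expose nothing like this to applications.

// Remember the route that worked for a pair of end points and reuse it as a
// hint for later exchanges between the same pair.

type EndpointPair = string;                              // "srcIP->dstIP"

const routeHints = new Map<EndpointPair, string[]>();    // pair -> observed hops

function recordRoute(src: string, dst: string, hops: string[]): void {
  routeHints.set(`${src}->${dst}`, hops);                // keep the latest measurement
}

function hintFor(src: string, dst: string): string[] | undefined {
  return routeHints.get(`${src}->${dst}`);               // reuse it for the next exchange
}

// A later request between the same end points could carry the hint in an
// application-level header for infrastructure that understands it.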

Question: Is there any effort already underway to eliminate malware from the stream?

Answer: MUM! I can neither confirm nor deny that any such effort is currently being made.