The other point is that it is fascinating that this stream is
continuing to generate such interest...

Roger
On 8 Feb 2008, at 20:55, Roger Harnden wrote:

> Rod, to the point as always.
>
> My point was that the VSM might not be the tool to analyse the  
> problem - it was not that I am happy with accidents that might  
> include myself among others.
>
> There are many strands going on in this thread. There are
> observer-determined points, there are SSM points, there are 'hard' and 'soft'
> points.  As has been said before, the VSM is not a universal  
> panacea. The VSM is not a 'tool for stopping air crashes'. It never  
> was, and it never will be. Human beings and 'messy' systems are  
> involved, as well as technical ones. The VSM might well help  
> highlight the overall situation. It will not in itself give an  
> answer. It is not a design blueprint for the 'perfect automated  
> system'. That's all I meant to say.
>
> Roger
>
>
> On 7 Feb 2008, at 22:58, ROD THOMAS wrote:
>
>> Hi Nick,
>>
>> Yes, just dipping in and out of this thread over the weeks has  
>> revealed a great deal of confusion as to just what cybernetics
>> offers for the prevention of failure. As if it offered a
>> magic spell to ward off all harmful possibilities!
>>
>> As I understand it, 'stability' is the cybernetic term for the  
>> output (that is of interest to us) remaining within acceptable  
>> bounds. Notwithstanding Roger's implied observation that from an  
>> ecological viewpoint an air traffic crash may be no bad thing, I  
>> think most people's intentionality towards the world would call it a
>> 'disaster' or 'accident'. Hence, to those people, cybernetics would  
>> say that 'stability' can sometimes be achieved by what Stafford  
>> sometimes called 'implicit control' - fast action, continuous and  
>> ideally automatic, negative feedback. As I understand it, this is
>> what the Wright brothers achieved - they designed an aircraft that
>> could not fly by itself; instead, they introduced a pilot to offer
>> feedback adjustments that counteracted tilt or dip. As de Bono once
>> wrote, 'their eyesight and seat of pants' completed the feedback  
>> loop. However, that cybernetic advance did not overcome the many
>> disturbances to flight that lie beyond the reach of eyesight and
>> seat of pants.
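>>
>> For what it's worth, that basic loop is easy to caricature in a few
>> lines of Python (the gain and the disturbances are invented numbers
>> for illustration, not aerodynamics):
>>
>>   # Minimal negative-feedback loop: a 'pilot' repeatedly corrects tilt.
>>   import random
>>
>>   tilt = 0.0   # deviation from level flight (arbitrary units)
>>   gain = 0.5   # strength of the corrective response
>>
>>   for step in range(20):
>>       disturbance = random.uniform(-1.0, 1.0)  # gusts, dips, etc.
>>       tilt += disturbance
>>       tilt -= gain * tilt  # feedback: act against the observed error
>>       print(f"step {step:2d}: tilt = {tilt:+.2f}")
>>
>> With the feedback term removed, the tilt wanders without limit; with
>> it, each disturbance is continually damped back towards level.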
>>
>> Ultrastability still relies on feedback, but as I understand it,
>> it is where there are a number of interacting feedback loops that
>> continually act to reconfigure until all sub-systems are within
>> stable zones: an equilibrium for the system as a whole. This means  
>> that control may not be located in a single controller - simply  
>> monitoring horizon and seat of pants - it may be distributed  
>> throughout the structure of feedback relationships. Hence a  
>> disturbance to any one system, potentially regardless of cause,  
>> will result in a series of changes that do not end until the whole
>> system recovers an equilibrium state. This was Ashby's machine -  
>> strangely enough built from surplus RAF equipment. No doubt modern  
>> aircraft have this kind of arrangement, with all their red
>> warning lights and so on.
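>>
>> A toy version of that reconfiguring behaviour, in the spirit of
>> Ashby's homeostat, might look like the sketch below (the couplings,
>> bounds and sizes are all invented for illustration):
>>
>>   # Toy ultrastability: when any essential variable leaves its bounds,
>>   # the step mechanism picks a new random configuration and tries again.
>>   import random
>>
>>   N, BOUND = 4, 1.0
>>   couplings = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]
>>   state = [random.uniform(-0.5, 0.5) for _ in range(N)]
>>
>>   def stable(xs):
>>       return all(abs(x) <= BOUND for x in xs)
>>
>>   for _ in range(10000):
>>       # each variable is driven by feedback from all the others
>>       state = [0.5 * sum(couplings[i][j] * state[j] for j in range(N))
>>                for i in range(N)]
>>       if not stable(state):
>>           # step function: reconfigure the feedback structure at random
>>           couplings = [[random.uniform(-1, 1) for _ in range(N)]
>>                        for _ in range(N)]
>>           state = [random.uniform(-0.5, 0.5) for _ in range(N)]
>>
>>   print("all essential variables within bounds:", stable(state))
>>
>> No single loop is 'the controller'; stability, when it arrives, is a
>> property of the whole configuration.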
>>
>> But obviously (?) even ultrastability can't thwart a devastating  
>> missile or a bomb that destroys the homeostatic configuration.
>>
>> I remember Stafford used to talk about Ashby's Law and airport  
>> security - his example was that his cigar case was an imaginary
>> bomb, and not once did anyone look in it when he checked in at the
>> airport. What are we to do - go through security bollock naked? But  
>> as every special forces soldier knows - the body itself has one or  
>> two natural suitcases.
>>
>> So in short, we are (wo)men, not gods.
>>
>> Rod Thomas
>>
>> Nick Green <[log in to unmask]> wrote:
>> Dear Paul
>>
>> How about this:
>>
>> Ultrastability is a desideratum of a system seen as a number of  
>> interacting
>> homoeostats. Clearly any perturbation, if big enough, will destroy
>> coherence.
>> A large enough meteor crashing on earth, for example, could end
>> mammalian/human life.
>>
>> Ashby set up a technical (notationally over-rich, perhaps)
>> description of
>> ultrastability in his "Design for an Intelligence
>> Amplifier" (Automata
>> Studies, ed. Shannon and McCarthy, Princeton UP, 1956) and embodied it in
>> the many
>> stable states achievable in his hardware homeostat (the step  
>> function of
>> which we may see as System 4) in his "Design for a Brain". There is a
>> feeling that redundancy is important, reflected in Ashby's Law of
>> Experience
>> (and his technical idea of cylindrance, isomorphic, perhaps, to
>> Stafford's
>> paradigm of the logical search in Brain).
>>
>> But having said all that, I'm not sure we can say much with
>> certainty about
>> the future. A number of small unexpected perturbations all within  
>> bounds
>> might defeat any control policy. With adequate Variety engineering,
>> at least,
>> we can monitor what we know to be these critical variables and  
>> their bounds,
>> simulate worst cases and deal with problems as they arise in as  
>> timely a
>> manner as possible - but that's probably more than most would say.
>> We laugh
>> at the obvious errors made by the "Wobbly Bridge" designers but can  
>> we
>> honestly say we can produce designs that will never go into
>> destructive
>> oscillations (as the sub-prime credit errors threaten to)?
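>>
>> As a sketch of the monitoring part of that, assuming we already know
>> the critical variables and their agreed bounds (the names and limits
>> below are invented):
>>
>>   # Check known critical variables against their bounds and flag
>>   # excursions early. Variable names and limits are invented examples.
>>   bounds = {"oscillation_amplitude": (0.0, 0.5),
>>             "load_factor": (0.0, 0.8)}
>>
>>   def check(readings):
>>       alerts = []
>>       for name, value in readings.items():
>>           lo, hi = bounds[name]
>>           if not lo <= value <= hi:
>>               alerts.append(f"{name} out of bounds: {value}")
>>       return alerts
>>
>>   print(check({"oscillation_amplitude": 0.62, "load_factor": 0.4}))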
>>
>> However, there are simple fundamental checks at present that are not
>> done, and
>> these we can tackle with some certainty. What is the flux of CO2 over
>> desert, sea, pampas and rain forest? We don't know. What was the
>> cost of risk
>> in sub-prime lending? We didn't know. What are the daily outcomes of
>> medicating patients? Again, we don't know - but we could know. All we
>> can do
>> (Chaitin-like) is decrease the probability of halting (or going
>> extinct,
>> non-viable) by adding variety (men, machines, money) where our
>> quantitative
>> models, always improving, suggest it is most needed. In effect, from
>> the VSM we
>> set up a transparent, structured heuristic for survival.
>>
>> Incidentally, if anybody wants a textbook on risk, I have been using
>> Bedford
>> and Cooke, "Probabilistic Risk Analysis: Foundations and
>> Methods" (Cambridge
>> UP 2001) for some years now and, at least, it makes me feel better.
>>
>> I once asked a Chem Eng friend who had been doing the risk analysis
>> for
>> Sizewell B what he did. "Oh", he said, "you know". Well, I didn't, and
>> that is
>> why I asked; but it turns out they just looked at scenarios and
>> probabilities
>> (sometimes of rare events - which can be tricky); added some (for  
>> logical
>> OR), multiplied others (for logical AND) and answered questions  
>> like "the
>> chances of an aircraft dropping out of the sky onto the reactor", the
>> pressure
>> vessel failing, and the way a smoke plume would drift. That is one
>> way of
>> simulating future worst cases and hence managing the future. System  
>> 4 is
>> asked: is the containment vessel strong enough? What do we do if it
>> isn't?
>> What are the chances of it failing due to excessive perturbation  
>> in, say, 25
>> years?
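>>
>> In code, that add-for-OR, multiply-for-AND bookkeeping amounts to
>> something like this (the probabilities are invented; the OR sum is
>> only the usual rare-event approximation):
>>
>>   # Fault-tree arithmetic: multiply independent events under an AND
>>   # gate; add them under an OR gate (rare-event approximation).
>>   def and_gate(*ps):   # all events must occur together
>>       out = 1.0
>>       for p in ps:
>>           out *= p
>>       return out
>>
>>   def or_gate(*ps):    # any one event suffices; valid for small ps
>>       return sum(ps)
>>
>>   p_aircraft_strike = 1e-7   # per year, illustrative only
>>   p_vessel_fails = 1e-4
>>   print(and_gate(p_aircraft_strike, p_vessel_fails))  # both at once
>>   print(or_gate(p_aircraft_strike, p_vessel_fails))   # either one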
>>
>> Best
>>
>> N.
>>
>> ----- Original Message -----
>> From: "Paul Stokes"
>> To:
>> Sent: Friday, January 18, 2008 1:43 AM
>> Subject: Fw: System failure
>>
>>
>> > Arthur,
>> >
>> > It is my understanding that for a well-designed cybernetic system
>> > you do not
>> > need to specify in advance the causes of possible future disturbances
>> > to the system.
>> >
>> > It would be a very interesting exercise though to specify an
>> > ultrastable
>> > (Ashby) aircraft capable of dealing with any possible source of
>> > disturbance.
>> > Sounds impossible? Any takers?
>> >
>> > Paul
>> >
>> >>
>> >> ----- Original Message -----
>> >> From: "Arthur Dijkstra"
>> >> To:
>> >> Sent: Thursday, January 17, 2008 4:52 PM
>> >> Subject: Re: System failure
>> >>
>> >>
>> >>> Thanks Stuart and all,
>> >>> Yes, I have read Perrow's book. Because of complexity and
>> >>> coupling we can
>> >>> expect failures. In the safety management system (SMS) these
>> >>> failures should
>> >>> be anticipated and avoided or controlled. I want to work
>> >>> backwards, so
>> >>> from the accident, via the conditions, into the organisation to find
>> >>> precursors
>> >>> and control them. The way you understand accidents shapes the
>> >>> way you try to
>> >>> prevent them. For now I want to describe accidents in cybernetic
>> >>> language.
>> >>> Regards,
>> >>> Arthur
>> >>>
>> >>>
>> >>> -----Original Message-----
>> >>> From: Forum dedicated to the work of Stafford Beer
>> >>> [mailto:[log in to unmask]] On behalf of Stuart Umpleby
>> >>> Sent: Thursday, 17 January 2008 17:25
>> >>> To: [log in to unmask]
>> >>> Subject: Re: System failure
>> >>>
>> >>> Probably you know about Charles Perrow's book Normal Accidents,  
>> 1984.
>> >>> As I recall, he claims that if the number of elements that can
>> >>> fail is
>> >>> large and the number of interconnections among elements is
>> >>> large, occasional
>> >>> failure is "normal." Stated differently, complexity can be a
>> >>> cause of
>> >>> failure. Backup systems prevent a crash due to the failure of a
>> >>> single component. Hence, several things need to go wrong at the
>> >>> same
>> >>> time to cause a crash. So, one looks for combinations of
>> >>> failures and
>> >>> factors which cause several components to fail at once.
>> >>> Perrow's book
>> >>> was widely read in the months and years before Y2K.
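>> >>>
>> >>> By way of illustration (per-flight probabilities invented): with
>> >>> an independent backup a crash needs both components to fail at
>> >>> once, but a common-cause factor that disables both can undo most
>> >>> of that gain:
>> >>>
>> >>>   # Redundancy vs common cause, with invented probabilities.
>> >>>   p_fail = 1e-3                 # a single component failing
>> >>>   p_common = 1e-5               # an event that disables both at once
>> >>>
>> >>>   p_single = p_fail             # no backup: one failure is enough
>> >>>   p_backup = p_fail * p_fail    # independent backup: both must fail
>> >>>   p_real = p_backup + p_common  # plus the common-cause combination
>> >>>
>> >>>   print(p_single, p_backup, p_real)  # 1e-03, 1e-06, 1.1e-05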
>> >>>
>> >>> On Jan 17, 2008 9:39 AM, Arthur Dijkstra
>> >>> wrote:
>> >>>> Hi Frank and others,
>> >>>> Thanks, I am aware of this. The challenge is to relate data
>> >>>> from the
>> >>>> operational flights and the organisation to the probability of an
>> >>>> accident.
>> >>>> Therefore I need an exhaustive list of possible ways to crash an
>> >>>> aircraft from
>> >>>> a cybernetic perspective.
>> >>>> Regards,
>> >>>> Arthur
>> >>>>
>> >>>> -----Original Message-----
>> >>>> From: Forum dedicated to the work of Stafford Beer
>> >>>> [mailto:[log in to unmask]] On behalf of Frank
>> >>>> Sent: Thursday, 17 January 2008 15:33
>> >>>> To: [log in to unmask]
>> >>>> Subject: Re: System failure
>> >>>>
>> >>>>
>> >>>> Dear Arthur,
>> >>>> whilst this is not a cybernetics approach, I think it could be
>> >>>> useful. It's
>> >>>> more front line, but it tells its own story...
>> >>>>
>> >>>> Extract from article
>> >>>> [...] But with so few crashes in recent years, air carriers and
>> >>>> regulators
>> >>>> have been trying to find other ways to identify potentially  
>> dangerous
>> >>>> trends. Instead of digging through debris, they now spend far  
>> more time
>> >>>> combing through computer records, including data downloaded from
>> >>>> thousands
>> >>>> of daily flights and scores of pilot incident reports.
>> >>>>
>> >>>> The information is stored on banks of computers, such as the  
>> server
>> >>>> housed
>> >>>> in a windowless office of a US Airways hangar here. Like its
>> >>>> counterparts at
>> >>>> other carriers, a small team of pilots and analysts sift through
>> >>>> thousands
>> >>>> of records daily looking for the seeds of the next big air  
>> crash.
>> >>>>
>> >>>> In recent years, the team has uncovered such potential safety  
>> problems
>> >>>> as
>> >>>> unsafe landing and takeoff practices and difficult landing  
>> approaches.
>> >>>> The
>> >>>> data have helped pinpoint areas that pose an increased risk of  
>> midair
>> >>>> or
>> >>>> ground collisions and have led to the discovery of a large  
>> bulge in the
>> >>>> runway of a Vermont airport. Even after threats have been  
>> reduced, US
>> >>>> Airways' executives and pilots say they keep monitoring the  
>> data to
>> >>>> ensure
>> >>>> that their new procedures work.
>> >>>>
>> >>>>
>> >>>> http://www.washingtonpost.com/wp-dyn/content/article/2008/01/12/AR2008011202407.html
>> >>>>
>> >>>> Hope this helps.
>> >>>>
>> >>>> Regards
>> >>>>
>> >>>> Frank Wood
>> >>>>
>> >>>> ----- Original Message -----
>> >>>> From: "Arthur Dijkstra"
>> >>>> To:
>> >>>> Sent: Thursday, January 17, 2008 2:10 PM
>> >>>> Subject: System failure
>> >>>>
>> >>>>
>> >>>> > Dear all,
>> >>>> > In my project to develop a Safety Management System for
>> >>>> > aviation I am
>> >>>> > evaluating different categories to describe aircraft
>> >>>> > accidents. Using
>> >>>> > cybernetics, I want to make an exhaustive and usable list of
>> >>>> > the ways an
>> >>>> > aircraft can crash. Sort of '50 ways to crash your
>> >>>> > aircraft' :-) Usable
>> >>>> > means
>> >>>> > in this context that in an organisation events can be
>> >>>> > related to the
>> >>>> > possible accidents. As a cybernetician, how would you build
>> >>>> > such a category
>> >>>> > (hierarchy of categories) to describe the possible accident
>> >>>> > types?
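>> >>>> >
>> >>>> > By way of illustration only, such a hierarchy might start as
>> >>>> > something like the sketch below (the category names are
>> >>>> > invented placeholders, not a proposed taxonomy):
>> >>>> >
>> >>>> >   # One possible shape for a hierarchy of accident categories.
>> >>>> >   accident_types = {
>> >>>> >       "loss of control": ["upset in flight", "stall", "icing"],
>> >>>> >       "collision": ["midair", "ground", "runway incursion"],
>> >>>> >       "system failure": ["engine", "structure", "avionics"],
>> >>>> >   }
>> >>>> >
>> >>>> >   def labels(tree):
>> >>>> >       # flatten to 'parent / child' labels for tagging events
>> >>>> >       return [f"{top} / {sub}"
>> >>>> >               for top, subs in tree.items() for sub in subs]
>> >>>> >
>> >>>> >   print(labels(accident_types))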
>> >>>> >
>> >>>> > Thanks for your response,
>> >>>> > Arthur
>> >>>> >
>> >>> --
>> >>> Stuart Umpleby, Research Program in Social and Organizational  
>> Learning
>> >>> 2033 K Street NW, Suite 230, The George Washington University,
>> >>> Washington, DC 20052
>> >>> www.gwu.edu/~umpleby, tel. 202-994-1642, fax 202-994-5284
>> >>>
>> >>>
>> >>
>> >
>>
>


For more information go to: www.metaphorum.org
For the Metaphorum Collaborative Working Environment (MCWE) go to:  www.platformforchange.org