
On revisiting Heinz von Foerster's aphorisms, a gloomy thought occurs to me:

We have lost our heroes. We are adrift. We are like squabbling offspring - neither mindful nor respectful of their work.

I do not see much of their wisdom in our deliberations.

Perhaps we need to revisit Cybernetics 101?

PS
  ----- Original Message ----- 
  From: Nick Green 
  To: [log in to unmask] 
  Sent: Saturday, February 09, 2008 5:39 PM
  Subject: Re: System failure


  Ah yes. See also Aphorism 14 etc. at http://www.cybsoc.org/heinz.htm. Tricky stuff! - and the list needs attention.
  Thanks Luc. Re infinite Variety, there may be a counting procedure, e.g. counting voids and non-voids, but even so, for many cases the variety for a non-void might be very large at the wave-mechanical level. Strict treatment of phase might suggest bounds by, e.g., computing a non-void as a soliton of some suitable kind - all a bit conjectural though.
    ----- Original Message ----- 
    From: Luc Hoebeke 
    To: [log in to unmask] 
    Sent: Saturday, February 09, 2008 3:36 PM
    Subject: Re: System failure


    Dear all, 


    The term 'trivial machine' comes from Heinz von Foerster. It is a machine whose transfer function (i.e. how the output relates to the input) can be determined. Living systems are best seen as non-trivial machines, which leads us to the concept of autopoiesis, where the concepts of input and output become irrelevant.
    As you rightly point out, Nick, variety calculations are irrelevant for non-trivial machines. As a rule of thumb, take variety as infinite.
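    (A minimal sketch of von Foerster's distinction, in Python; the two machines here are hypothetical toys invented for illustration, not anything from the thread:)

```python
# Toy trivial vs non-trivial machine (hypothetical illustration).

def trivial_machine(x):
    # Trivial machine: a fixed, determinable transfer function.
    return 2 * x

class NonTrivialMachine:
    # Non-trivial machine: an internal state changes with every input,
    # so the same input can produce different outputs over time.
    def __init__(self):
        self.state = 0

    def step(self, x):
        self.state = (self.state + x) % 3  # hidden state update
        return x + self.state              # output depends on history

m = NonTrivialMachine()
print(trivial_machine(4), trivial_machine(4))  # 8 8 - same input, same output
print(m.step(4), m.step(4))                    # 5 6 - same input, different outputs
```

    An observer can determine the trivial machine's transfer function from input-output pairs alone; the hidden state of the non-trivial machine makes that experiment inconclusive, which is why variety calculations break down.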


    Kind regards,


    Luc




    Op 9-feb-08, om 16:06 heeft Nick Green het volgende geschreven:


      Yes indeed. We deliberately construct digital machines out of trivial components and, by enforcing the begins and ends of a bit, try to make something deterministic - well, reliable anyway. But I like matter that learns from any input. A conducting wire is a trivial-seeming machine, but early in his career dear old Gordon showed that wires in an electrochemical solution will self-repair, i.e. a broken wire in a solution will regrow if the potential is maintained (photographs etc. in his "Approach to Cybernetics"). There seem to be requirements of learning and redundancy for self-repair. Actually Stafford mentions this, and Gordon calling "It's grown an ear" of a set-up they had in Baker Street. I had a quick look but couldn't find anything on trivial machines in what I have of Ashby - do you recall where he raises this? Incidentally, with Gordon's self-repairing wire, to me at least, it is quite hard to do Variety calculations! Finite recursive begins and ends seem necessary for Variety calculations, i.e. you have to know what to count.
        ----- Original Message -----
        From: Stefan Wasilewski
        To: [log in to unmask]
        Sent: Saturday, February 09, 2008 1:25 PM
        Subject: Re: System failure


        How eloquently put. 


        Deep thought time!




        On 9 Feb 2008, at 11:42, Roger Harnden wrote:


          Are we all saying the same thing in slightly different languages, or saying slightly different things in the same language??? 


          I suppose my point is to distinguish the VSM as a thinking/reflection tool from the VSM as a way of analysing a situation. And yes, the two are intimately related but not the same. We flip-flop between the two at every instant of our lives, in some form or another. But they are clearly two distinct modes of being.


          And 'learning', as Stefan uses it, is surely an emergent property of this flip-flop between inside and outside, between thought and action. Were there no difference, there would be no learning, no humanness. It is precisely the asymmetrical nature of this that is intriguing, and that is allowing us to have such conversations as these.


          Some systems are straightforward. Many systems are not straightforward (was it Ashby who referred to 'trivial' and 'non-trivial' machines?). Trivial machines do not learn - they DO whatever they have been produced to do - and occasionally they will have component failure. At other times - which actually is a generalisation of the last sentence - they will interact with other non-trivial machines, whether in a coherent dance, or otherwise. But any such interaction will entail wear. That is what life is.


          Our thinking, our tools, our interventions, as non-trivial machines, will entail wear. But because we are reflective beings we can identify such wear, sometimes early enough to avoid breakdown (no accident), sometimes not quite early enough (accident).


          Cybernetic and other tools are aids to us, and in certain instances move our corporeal world towards cybernetic perfection - for instance, adaptive systems. Potentially, such adaptive systems might, by their nature and by human understanding, be described as 'learning systems'. But I would assume (although this is a personal, not objective, statement) that the quality of such 'learning' or 'creativity' will not be human. But perhaps this is short-sighted or wrong (as many science fiction authors have explored).


          But in our imperfect present existence of aircraft crashes, systematic domestic abuse and tribal/racial/political violence, our struggle is towards betterment and, as important, a shared understanding and acceptance of what betterment means.


          For myself, cybernetic ideas help in this respect - in braiding my tendency to be subjective, short-sighted and prejudiced into the ebb and flow of my environment, which is partly constituted through the actions that display the 'tendency to be subjective, short-sighted and prejudiced' of others.


          Whether the specific task is air control betterment, or sustainable development, tools that help us to properly focus on the task-at-hand are as important as any concrete 'tools and spanners'. They help me use such tools properly, and envisage and create perhaps more suitable tools.


          I think my only point is (in line with Stefan's words) that whoever we are and whatever we do, we should not confuse the quality of our tools with perfection, except when a system is able to be conceived, designed and created to function AS a trivial machine.


          Certain aspects of the case to hand (risk management and air control) might well be subject to the descriptor of trivial machine. But the whole system certainly will not. That doesn't dismiss or trivialise either machine state. It merely says that our first task is to be ever alert to the distinctions and assumptions we are making.


          Roger


          On 9 Feb 2008, at 02:57, Nick Green wrote:


            We can and do pass laws to make accidents less likely - but control society?

            Surely the real question here is how the VSM could be applied to the Justice System? I would have thought rather easily. At least we would know how well the regular attendees - judges, advocates, police, experts, prisons etc. - performed. At present it's a lottery (we don't have enough good statistics) and (S4) development plans ("reform" to minimise error, risk etc. and maximise justice) are hardly clear enough for us, the paying customers, to debate and choose. From Rawls' "Theory of Justice" we know it's a matter of striking the right balance between rich and poor - a classic problem of homeostasis in fact - and the problem is that justice is a rich man's toy. It needn't be like that, and good authenticated evidence is potentially much cheaper to produce these days.
              ----- Original Message -----
              From: ROD THOMAS
              To: [log in to unmask]
              Sent: Friday, February 08, 2008 9:34 PM
              Subject: Re: System failure


              I totally agree - I'm not sure what your point is, as I don't think I ever suggested otherwise.
              I suppose such a system would demonstrate that there is nothing left to learn... but that possibility seems remote.
              Rod

              Stefan Wasilewski <[log in to unmask]> wrote:
                What would be the point of a system that could control society and all accidents?


                Where would we have the ability to learn?


                Even 1984 had an escape clause in it and that was one of Orwell's darker hours.


                I think we have freedom for a reason: teach, but let them make their own choices - like raising children.


                Like Roger, I think this discussion has a lot more legs to it.


                Stefan


                On 8 Feb 2008, at 21:21, ROD THOMAS wrote:


                  Hi
                  That's probably because it has a concrete problem that has involved some actual discussion of management cybernetic principles. And we are all helpful sorts wishing to assist Arthur's PhD. In my experience he will need all the help he can get; cybernetics is the last thing you should introduce to a PhD, because academia hates it to the marrow. But I have just been dipping in and out and haven't had time to read it all.

                  I didn't mean to suggest that you were happy with crash accidents - that would be silly. But I'm sure that at some stage you made a point about failure or disaster being interest-relative, and that researching it had ethical dimensions - it stuck in my mind because you illustrated it with reference to a lion eating an antelope. It reminded me of C.H. Waddington having some theory or other that you could base ethics on evolution - not that I'm suggesting that this was your argument.
                  Best,
                  Rod 

                  Roger Harnden <[log in to unmask]> wrote:
                    The other point is that it is fascinating that this stream is continuing to generate such interest...


                    Roger

                    On 8 Feb 2008, at 20:55, Roger Harnden wrote:


                      Rod, to the point as always. 


                      My point was that the VSM might not be the tool to analyse the problem - it was not that I am happy with accidents that might include myself among others.


                      There are many strands going on in this thread. There are observer-determined points, there are SSM points, there are 'hard' and 'soft' points. As has been said before, the VSM is not a universal panacea. The VSM is not a 'tool for stopping air crashes'. It never was, and it never will be. Human beings and 'messy' systems are involved, as well as technical ones. The VSM might well help highlight the overall situation. It will not in itself give an answer. It is not a design blueprint for the 'perfect automated system'. That's all I meant to say.


                      Roger




                      On 7 Feb 2008, at 22:58, ROD THOMAS wrote:


                        Hi Nick,

                        Yes, just dipping in and out of this thread over the weeks has revealed quite a great deal of confusion as to just what cybernetics offers for the prevention of failure. As if it offers a magic spell to ward off all harmful possibilities!

                        As I understand it, 'stability' is the cybernetic term for the output (that is of interest to us) remaining within acceptable bounds. Notwithstanding Roger's implied observation that from an ecological viewpoint an air traffic crash may be no bad thing, I think most people's intentionality to the world would call it a 'disaster' or 'accident'. Hence, to those people, cybernetics would say that 'stability' can sometimes be achieved by what Stafford sometimes called 'implicit control': fast-acting, continuous and ideally automatic negative feedback. As I understand it, this is what the Wright brothers achieved - they designed an aircraft that could not fly by itself; instead they introduced a pilot to offer feedback adjustments that counteracted tilt or dip. As de Bono once wrote, 'their eyesight and seat of pants' completed the feedback loop. However, that cybernetic advance did not overcome the many disturbances to flight that are not overcome by eyesight and seat of pants.
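                        (A minimal sketch of that 'implicit control' idea as a toy Python simulation; the gain and disturbance figures are invented for illustration and have nothing to do with real aircraft:)

```python
# Toy negative-feedback loop (hypothetical numbers, not a flight model).

def simulate(steps=50, gain=0.5, disturbance=1.0):
    # Each step a disturbance adds tilt; the 'pilot' subtracts a
    # correction proportional to the observed tilt (negative feedback).
    tilt = 0.0
    history = []
    for _ in range(steps):
        tilt += disturbance   # e.g. a gust tilting the craft
        tilt -= gain * tilt   # eyesight-and-seat-of-pants correction
        history.append(tilt)
    return history

# With feedback the tilt stays bounded (it settles near
# disturbance * (1 - gain) / gain); with gain = 0 it grows without limit.
print(simulate()[-1], simulate(gain=0.0)[-1])
```

                        'Stability' in the sense above is just this: the output of interest stays within acceptable bounds so long as the feedback loop is closed.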

                        Ultrastability still relies on feedback, but as I understand it, it's where a number of interacting feedback loops continually act to reconfigure until all sub-systems are within stable zones: an equilibrium for the system as a whole. This means that control may not be located in a single controller - simply monitoring horizon and seat of pants - it may be distributed throughout the structure of feedback relationships. Hence a disturbance to any one system, potentially regardless of cause, will result in a series of changes that have no end until the whole system recovers an equilibrium state. This was Ashby's machine - strangely enough, built from surplus RAF equipment. No doubt modern aircraft have these kinds of arrangement, with all their red warning lights etc.
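                        (A correspondingly minimal sketch of ultrastability - a hypothetical toy in the spirit of Ashby's homeostat, not his actual machine: an essential variable is monitored, and whenever it leaves its bounds the system makes a random step-change to its own parameter, as the homeostat's uniselectors did, until a stable configuration is found:)

```python
# Toy ultrastable system in the spirit of Ashby's homeostat (hypothetical).
import random

def run(steps=200, bound=10.0):
    # Essential variable x is driven by simple dynamics through parameter w.
    # The loop is stable only for some values of w; when x leaves its
    # bounds, the system step-changes w at random and tries again.
    w = random.uniform(-2, 2)
    x = 0.0
    resets = 0
    for _ in range(steps):
        x = w * x + 1.0                # first-order feedback dynamics
        if abs(x) > bound:             # essential variable out of bounds
            w = random.uniform(-2, 2)  # random step-change (uniselector)
            x = 0.0
            resets += 1
    return x, resets

x, resets = run()
print(f"final value {x:.2f} after {resets} parameter step-changes")
```

                        Note that the search for stability is blind: the system does not know which parameter values are stable, it simply keeps step-changing until the essential variable stays within its bounds.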

                        But obviously (?) even ultrastability can't thwart a devastating missile or a bomb that destroys the homeostatic configuration.

                        I remember Stafford used to talk about Ashby's Law and airport security - his example was that his cigar case was an imaginary bomb and no one ever looked in it when he checked in at the airport. What are we to do - go through security bollock naked? But as every special forces soldier knows, the body itself has one or two natural suitcases.

                        So in short, we are (wo)men not gods.

                        Rod Thomas

                        Nick Green <[log in to unmask]> wrote:
                          Dear Paul

                          How about this:

                          Ultrastability is a desideratum of a system seen as a number of interacting
                          homoeostats. Clearly any perturbation, if big enough, will destroy coherence.
                          A large enough meteor crashing on earth, for example, could end
                          mammalian/human life.

                          Ashby set up a technical (notationally over-rich perhaps) description of
                          ultrastability in his "Design for an Intelligence Amplifier" (Automata
                          Studies, ed. Shannon and McCarthy, Princeton UP, 1956), embodied in the many
                          stable states achievable in his hardware homeostat (the step function of
                          which we may see as System 4) in his "Design for a Brain". There is a
                          feeling that redundancy is important, reflected in Ashby's Law of Experience
                          (and his technical idea of cylindrance - isomorphic, perhaps, to Stafford's
                          paradigm of the logical search in Brain).

                          But having said all that, I'm not sure we can say much with certainty about
                          the future. A number of small unexpected perturbations, all within bounds,
                          might defeat any control policy. With adequate Variety engineering at least
                          we can monitor what we know to be the critical variables and their bounds,
                          simulate worst cases and deal with problems as they arise in as timely a
                          manner as possible - but that's probably more than most would say. We laugh
                          at the obvious errors made by the "Wobbly Bridge" designers, but can we
                          honestly say we can produce designs that will never go into destructive
                          oscillations (as the sub-prime credit errors threaten to)?

                          However, there are simple fundamental checks at present that are not done, and
                          these we can tackle with some certainty. What is the flux of CO2 over
                          desert, sea, pampas and rain forest? We don't know. What was the cost of risk
                          in sub-prime lending? We didn't know. What are the daily outcomes of
                          medicating patients? Again we don't know - but we could know. All we can do
                          (Chaitin-like) is decrease the probability of halting (or going extinct,
                          non-viable) by adding variety (men, machines, money) where our quantitative
                          models, always improving, suggest it is most needed. In effect, from the VSM we
                          set up a transparent structured heuristic for survival.

                          Incidentally if anybody wants a textbook on Risk I have been using Bedford 
                          and Cooke "Probabilistic Risk Analysis: foundations and methods" (Cambridge 
                          UP 2001) for some years now and, at least, it makes me feel better.

                          I once asked a Chem Eng friend who had been doing the Risk analysis for
                          Sizewell B what he did. "Oh", he said, "you know". Well, I didn't, and that is
                          why I asked, but it turns out they just looked at scenarios and probabilities
                          (sometimes of rare events - which can be tricky); added some (for logical
                          OR), multiplied others (for logical AND) and answered questions like "the
                          chances of aircraft dropping out of the sky onto the reactor", the pressure
                          vessel failing, and the way a smoke plume would drift. That is one way of
                          simulating future worst cases and hence managing the future. System 4 is
                          asked: is the containment vessel strong enough? What do we do if it isn't?
                          What are the chances of it failing due to excessive perturbation in, say, 25
                          years?
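                          (The add-for-OR, multiply-for-AND bookkeeping described above is standard fault-tree arithmetic for independent events; a minimal sketch with invented probabilities, nothing to do with the actual Sizewell B figures:)

```python
# Toy fault-tree arithmetic for independent events (hypothetical numbers).

def p_and(*ps):
    # AND gate: all events must occur together, so multiply
    # (assuming independence).
    out = 1.0
    for p in ps:
        out *= p
    return out

def p_or(*ps):
    # OR gate: at least one event occurs. Exact form; for rare events
    # this is approximately the simple sum mentioned above.
    none = 1.0
    for p in ps:
        none *= (1.0 - p)
    return 1.0 - none

# e.g. a breach needs overpressure AND relief-valve failure:
print(p_and(1e-3, 1e-2))   # multiply: roughly 1e-05
# an initiating event: aircraft strike OR internal fire:
print(p_or(1e-6, 1e-4))    # roughly the sum of the two, about 1.01e-04
```

                          The simple sum for OR overestimates slightly (it double-counts the case where both events occur), which is why the exact form above subtracts the probability that none occur.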

                          Best

                          N.

                          ----- Original Message ----- 
                          From: "Paul Stokes" 
                          To: 
                          Sent: Friday, January 18, 2008 1:43 AM
                          Subject: Fw: System failure


                          > Arthur,
                          >
                          > It is my understanding that for a well-designed cybernetic system you do
                          > not
                          > need to specify in advance the causes of possible future disturbance to the
                          > system.
                          >
                          > It would be a very interesting exercise though to specify an ultrastable
                          > (Ashby) aircraft capable of dealing with any possible source of 
                          > disturbance.
                          > Sounds impossible? Any takers?
                          >
                          > Paul
                          >
                          >>
                          >> ----- Original Message ----- 
                          >> From: "Arthur Dijkstra" 
                          >> To: 
                          >> Sent: Thursday, January 17, 2008 4:52 PM
                          >> Subject: Re: System failure
                          >>
                          >>
                          >>> Thanks Stuart and all,
                          >>> Yes, I have read Perrow's book. Because of complexity and coupling we
                          >>> can
                          >>> expect failures. In the safety management system (SMS) these failures
                          >>> should
                          >>> be anticipated and avoided or controlled. I want to work backwards, so
                          >>> from
                          >>> the accident, via the conditions, into the organisation, to find
                          >>> precursors
                          >>> and control them. The way you understand accidents shapes the way you try
                          >>> to
                          >>> prevent them. For now I want to describe accidents in cybernetic
                          >>> language.
                          >>> Regards,
                          >>> Arthur
                          >>>
                          >>>
                          >>> -----Oorspronkelijk bericht-----
                          >>> Van: Forum dedicated to the work of Stafford Beer
                          >>> [mailto:[log in to unmask]] Namens Stuart Umpleby
                          >>> Verzonden: donderdag 17 januari 2008 17:25
                          >>> Aan: [log in to unmask]
                          >>> Onderwerp: Re: System failure
                          >>>
                          >>> Probably you know about Charles Perrow's book Normal Accidents, 1984.
                          >>> As I recall, he claims that if the number of elements that can fail is
                          >>> large and the interconnections among elements are many, occasional
                          >>> failure is "normal." Stated differently, complexity can be a cause of
                          >>> failure. Back-up systems prevent a crash due to the failure of a
                          >>> single component. Hence, several things need to go wrong at the same
                          >>> time to cause a crash. So, one looks for combinations of failures and
                          >>> factors which cause several components to fail at once. Perrow's book
                          >>> was widely read in the months and years before Y2K.
                          >>>
                          >>> On Jan 17, 2008 9:39 AM, Arthur Dijkstra 
                          >>> wrote:
                          >>>> Hi Frank and others,
                          >>>> Thanks, I am aware of this. The challenge is to relate data from the
                          >>>> operational flights and the organisation to the probability of an
                          >>>> accident.
                          >>>> Therefore I need an exhaustive list of possible ways to crash an aircraft
                          >>> from
                          >>>> a cybernetic perspective.
                          >>>> Regards,
                          >>>> Arthur
                          >>>>
                          >>>> -----Oorspronkelijk bericht-----
                          >>>> Van: Forum dedicated to the work of Stafford Beer
                          >>>> [mailto:[log in to unmask]] Namens Frank
                          >>>> Verzonden: donderdag 17 januari 2008 15:33
                          >>>> Aan: [log in to unmask]
                          >>>> Onderwerp: Re: System failure
                          >>>>
                          >>>>
                          >>>> Dear Arthur,
                          >>>> whilst this is not a cybernetics approach, I think it could be useful.
                          >>>> It's
                          >>>> more front-line, but tells its own story...
                          >>>>
                          >>>> Extract from article
                          >>>> [...] But with so few crashes in recent years, air carriers and
                          >>>> regulators
                          >>>> have been trying to find other ways to identify potentially dangerous
                          >>>> trends. Instead of digging through debris, they now spend far more time
                          >>>> combing through computer records, including data downloaded from
                          >>>> thousands
                          >>>> of daily flights and scores of pilot incident reports.
                          >>>>
                          >>>> The information is stored on banks of computers, such as the server
                          >>>> housed
                          >>>> in a windowless office of a US Airways hangar here. Like its 
                          >>>> counterparts
                          >>> at
                          >>>>
                          >>>> other carriers, a small team of pilots and analysts sift through
                          >>>> thousands
                          >>>> of records daily looking for the seeds of the next big air crash.
                          >>>>
                          >>>> In recent years, the team has uncovered such potential safety problems 
                          >>>> as
                          >>>> unsafe landing and takeoff practices and difficult landing approaches.
                          >>>> The
                          >>>> data have helped pinpoint areas that pose an increased risk of midair 
                          >>>> or
                          >>>> ground collisions and have led to the discovery of a large bulge in the
                          >>>> runway of a Vermont airport. Even after threats have been reduced, US
                          >>>> Airways' executives and pilots say they keep monitoring the data to
                          >>>> ensure
                          >>>> that their new procedures work.
                          >>>>
                          >>>>
                          >>> http://www.washingtonpost.com/wp-dyn/content/article/2008/01/12/AR2008011202
                          >>>> 407.html
                          >>>>
                          >>>> Hope this helps.
                          >>>>
                          >>>> Regards
                          >>>>
                          >>>> Frank Wood
                          >>>>
                          >>>> ----- Original Message -----
                          >>>> From: "Arthur Dijkstra" 
                          >>>> To: 
                          >>>> Sent: Thursday, January 17, 2008 2:10 PM
                          >>>> Subject: System failure
                          >>>>
                          >>>>
                          >>>> > Dear all,
                          >>>> > In my project to develop a Safety Management System for aviation I am
                          >>>> > evaluating different categories to describe aircraft accidents. Using
                          >>>> > cybernetics, I want to make an exhaustive and usable list of the ways
                          >>>> > an
                          >>>> > aircraft can crash. Sort of 50 ways to crash your aircraft :-) Usable
                          >>>> > means
                          >>>> > in this context that in an organisation events can be related to the
                          >>>> > possible accidents. As a cybernetician how would you build such a
                          >>> category
                          >>>> > (hierarchy of categories) to describe the possible accident types ?
                          >>>> >
                          >>>> > Thanks for your response,
                          >>>> > Arthur
                          >>>> >
                          >>>> > For more information go to: www.metaphorum.org
                          >>>> > For the Metaphorum Collaborative Working Environment (MCWE) go to:
                          >>>> > www.platformforchange.org
                          >>>> >
                          >>>> >
                          >>>>
                          >>>>
                          >>>
                          >>>
                          >>>
                          >>> -- 
                          >>> Stuart Umpleby, Research Program in Social and Organizational Learning
                          >>> 2033 K Street NW, Suite 230, The George Washington University,
                          >>> Washington, DC 20052
                          >>> www.gwu.edu/~umpleby, tel. 202-994-1642, fax 202-994-5284
                          >>>
                          >>>
                          >>
                          >
























































































~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
For more information go to: www.metaphorum.org

For the Metaphorum Collaborative Working Environment (MCWE) go to:  www.platformforchange.org

Archive available at https://listserv.heanet.ie/ucd-staffordbeer.html

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~