Saturday, October 17, 2015

A pointed commentary on the state of model use

In late September, Scott Finnie published an apt (at least in my judgment) critical commentary on the state and evolution of modeling as used in real life, across its various current strands. Finnie finds that some of the trends in modeling have evolved badly, and he finds practically no conceptual approach or applied tool that is entirely satisfactory.
Finnie begins by devoting a brief paragraph to the more formal methods, which he considers generally inapplicable because of what they demand:
Formal methods can definitely contribute to the “better software” imperative. Any impact on “faster” is a second order effect however: the models have to be translated into working software by hand. And the learning curve can be steep, requiring a solid foundation in the theory and notation of one or more mathematical disciplines (predicate logic, sets, graphs).
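To make that learning curve concrete, here is a toy invariant in the set-and-predicate style of methods such as Z or B. Everything in it (the names members, stock, borrowed, maxLoans, and the rule itself) is invented for illustration:

```latex
% Illustrative only: a library-loans invariant in predicate-logic/set
% notation. Every loan relates a registered member to a stocked copy,
% and no member holds more than maxLoans copies at once.
\[
borrowed \subseteq members \times stock
\]
\[
\forall m \in members \;.\; \bigl|\{\, c \in stock \mid (m, c) \in borrowed \,\}\bigr| \le maxLoans
\]
```

Stating and checking properties like these is exactly the foundation in logic and set theory Finnie refers to; and even once such a specification is proven, it still has to be translated into working software by hand.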
Second, Finnie addresses domain-specific languages (DSLs), whose limited adoption he traces to how difficult it is for in-house teams to take on the creation of languages suited to a diversity of problems. In general, he sees them as viable only in the niche of companies whose product is software:
Domain-specific approaches can directly address both “better” and “faster”. But they are not without hurdles, both technical and organisational. On the technical front it’s the challenges of language design. Textual approaches (e.g. Spoofax, Xtext, MPS, Rascal) require the designer to understand compiler construction: parsing, linking, semantic analysis, type systems and so on. Graphical approaches such as MetaEdit+ perhaps simplify that. But there’s still the question of designing a language.
The organisational barriers are at least as significant – and independent of the textual/graphical debate. Getting traction for a DSL depends heavily on the organisation’s approach to software. It’s possible in companies building software products, especially those offering related product families. The cost of investing in language design and tooling is justified through repeatability and hence efficiency. But not all software falls into the “product family” bucket. Even when it does, some organisations – and many developers – are nervous about building a proprietary language. Maintainability, recruitment and CV curation can be powerful adversarial forces.
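To give a sense of the technical hurdles Finnie lists, here is a minimal sketch, in Python, of a toy textual DSL: a made-up "task" language with a parser and a linking check. Nothing here comes from Spoofax, Xtext, MPS or Rascal; the grammar and all names are invented for illustration:

```python
import re

# Toy grammar: each line declares "task NAME: dep, dep, ..."
GRAMMAR = re.compile(r"task (\w+):\s*(.*)")

def parse(source):
    """Parsing: turn DSL text into a task -> dependencies table."""
    tasks = {}
    for line in filter(None, map(str.strip, source.splitlines())):
        m = GRAMMAR.fullmatch(line)
        if not m:
            raise SyntaxError(f"cannot parse: {line!r}")
        name, deps = m.groups()
        tasks[name] = [d.strip() for d in deps.split(",") if d.strip()]
    return tasks

def link(tasks):
    """Linking/semantic analysis: referenced tasks must be declared."""
    for name, deps in tasks.items():
        for dep in deps:
            if dep not in tasks:
                raise NameError(f"{name!r} depends on undeclared {dep!r}")

tasks = parse("task compile:\ntask test: compile\ntask build: compile, test")
link(tasks)
print(tasks)  # {'compile': [], 'test': ['compile'], 'build': ['compile', 'test']}
```

Even at this toy scale the shape of the problem is visible: a grammar, a parser, a linker, error reporting. Scaling that to a real domain, with type checking and tooling, is where the technical and organisational costs Finnie describes come in.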
Finally, he focuses on general-purpose modeling languages, from which he expects more, but where he finds great difficulties. His main objection to this kind of modeling, concentrating fundamentally on UML, is the inability of almost all of its developed variants to get from a more or less defined schema to executable code, or, as it is usually put, to executable models:
The vast majority of UML models are mere sketches. Sketches aren’t working software.
Sketches need lots of human endeavour to translate them into working software. Which isn’t to say they’re bad: a quick diagram on the whiteboard can be invaluable. But it’s a long way from working software. At the height of its hype curve, the UML wasn’t capable of describing precise, executable models(2). Without those, it’s impossible to automate software generation. Without automation, we don’t get better software quicker.
This is the fundamental mistake with MDA:
  1. An incomplete language intended for sketches is not a viable basis for precise, executable models.
  2. Without precise models,
    1. no formal checking can take place. So the impact on “better” is marginal;
    2. no process automation can take place. So the impact on “faster” is at best nil.
Summing up: MDA didn’t deliver better software quicker. It had the hype and the backing of large organisations. It didn’t stick because, brutally, it didn’t work.
So – in the context of general purpose modelling – let’s be clear about this: as long as a manually intensive process sits between a model and working software, the model is no more valuable than a sketch.
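Finnie's point about what separates a sketch from an executable model can be illustrated with a deliberately tiny sketch in Python: a declarative model, here plain data, is turned into runnable classes with no manual step in between. The model format and names are invented for illustration and bear no relation to real MDA tooling:

```python
# Hypothetical "model": entity names mapped to their typed fields.
MODEL = {
    "Customer": {"name": "str", "email": "str"},
    "Order":    {"customer": "Customer", "total": "float"},
}

def generate(model):
    """Translator: emit Python source for every entity in the model."""
    lines = []
    for cls, fields in model.items():
        params = ", ".join(fields)
        lines.append(f"class {cls}:")
        lines.append(f"    def __init__(self, {params}):")
        for field in fields:
            lines.append(f"        self.{field} = {field}")
        lines.append("")
    return "\n".join(lines)

namespace = {}
exec(generate(MODEL), namespace)        # the model is now executable
ada = namespace["Customer"]("Ada", "ada@example.com")
print(type(ada).__name__, ada.name)     # -> Customer Ada
```

Trivial as it is, the generator has the property the paragraph above demands: once the translator exists, changing the model changes the software without a manually intensive process sitting in between.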
Finnie maintains that the conditions are now in place to repair these shortcomings. He enumerates an open-ended list of matters still to be adjusted and resolved:

  • whilst there are a plethora of tools for building models, few of them support executable models. Of that few, far fewer still are actually rewarding to use.
  • we’re missing the pre-existing models that serve as exemplars. Models that are demonstrably translated into real, working software. Models that can be adapted or reused to meet different requirements.
  • we’re missing the translators that turn those models into working software. Automatically, quickly and repeatably. We have the tools to write those translators: we don’t have the translators themselves. At least not robust, industrial quality translators that produce robust, industrial quality software. That can be used by real users or sold to real customers. Results that look as good as, and function as well as, ‘hand written’ alternatives. Crucially, those translators need to be open for adaptation.
  • we’re missing the cohesive environments that make it easy. Environments that don’t need weird hacks or obtuse incantations to make them work. Tools that “just work”. Tools that combine the constituent parts for modelling and translation into a consistent, seamless, industrial-quality experience.
  • we’re missing eco-systems that pull these things together to forge communities. Communities that generate interest because they’re doing cool stuff.
Given this discussion, which I believe remains of fundamental interest, from our community of Plex users... how many opportunities have been lost along this road toward greater abstraction and reach?
Note: if you read Finnie's article, be sure not to miss the comments that follow it.
