I recently read an impressive editorial article that aims to lay bare possible answers to the question "why do we (as software engineering researchers) usually publish scientific papers?". Jeff Offutt listed four answers: 1. yet another bullet in a C.V.; 2. satisfying a local measure for getting a promotion; 3. having an impact on other researchers in our community by producing implicit knowledge; 4. effectively solving relevant challenges that practitioners in our community are facing at the moment! Finally, he mentioned that in order to be effectively influential, we have to write papers that add value to the field, present them clearly, and publish them in the right venue that our community cares about. As a researcher in the software engineering field, I would like to add another possible answer to this question: sometimes, in the middle of our research, we need to hear back from our community about whether we are on the right track. We might need their feedback on the spot, when we are revising or presenting or conducting experiments, or later, when they email us to ask something about the work, want to hear about recent developments regarding the approach targeted in that paper, or hopefully cite it in their own research. At this stage, the components of the solution to our research problem are not all fully developed; they are somewhere in between, progressing toward the objectives that have been set. At this stage, such manuscripts don't have any impact on our practice at all, and the other motivations really shouldn't apply in fairly developed contexts. However, they are not totally out of the blue in all environments and communities! By the way, we are all human beings and need some motivation to cheer our research work up a bit 🙂
The problem domain of service identification, which is intrinsically part of the larger domain of service-oriented modeling, was put forward by pioneers of service-oriented and cloud computing, namely Ali Arsanjani from IBM and Thomas Erl, back in 2004.
Since then, several researchers have been attempting to develop appropriate solutions for this problem area, and according to a systematic review by Qing Gu and Patricia Lago (2010), researchers and practitioners in the service-oriented computing community have proposed a vast number of service identification methods whose key missions are determining the scope of functionality a service exposes to meet business needs, and the boundaries between services that achieve the best design measures. We proposed our first humble idea at the International Conference on Services Computing (SCC) in 2008. Due to the heterogeneity of these methods, practitioners often face the difficulty of choosing the right service identification approach that copes with available resources and fits their needs. These methods rely heavily on the experience of architects, directing them in the identification of services and architectural elements through descriptive, qualitative guidelines. Such service identification methods have proved laborious and unproductive, given the sheer scale of enterprises and the human limitations in comprehending the mostly non-quantitative, textual service requirements of such large enterprises while deriving proper services from business models. Based on these challenges and the opportunity to improve efficiency, we proposed a novel approach called ASIM (Automated Service Identification Method) for automatically identifying and partly specifying enterprise-level software services from business models, using best practices and principles of model-driven software development. On these premises, the service identification problem, for which we proposed an automated method, can be formulated as follows: "How can good service abstractions (at the right level of granularity) with acceptable technical metrics be derived automatically from high-level business requirements and process models?"
We have formulated service identification as a multi-objective optimization problem and solved it with a novel meta-heuristic optimization algorithm that derives appropriate service abstractions using quantitative measures for granularity, coupling, cohesion, reusability, and maintainability. ASIM helps architects derive the right architectural elements of service-oriented solutions, which in turn lead to effective service models.
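To make the optimization framing concrete, here is a minimal sketch of how a candidate partition of business operations into services could be scored on two of the measures named above, coupling and cohesion. The operation names, the dependency graph, and the weights are illustrative assumptions of mine, not ASIM's actual model.

```python
# Hypothetical sketch: scoring a candidate service partition by cohesion
# (intra-service dependency density) and coupling (cross-service edges).
# A meta-heuristic would search the space of partitions for high fitness.
from itertools import combinations

# Dependencies between business operations (undirected, illustrative).
DEPS = {
    ("create_order", "check_stock"),
    ("create_order", "bill_customer"),
    ("check_stock", "reserve_item"),
    ("bill_customer", "send_invoice"),
}

def connected(a, b):
    return (a, b) in DEPS or (b, a) in DEPS

def cohesion(service):
    """Fraction of intra-service operation pairs that depend on each other."""
    pairs = list(combinations(sorted(service), 2))
    if not pairs:
        return 1.0
    return sum(connected(a, b) for a, b in pairs) / len(pairs)

def coupling(partition):
    """Number of dependencies crossing service boundaries."""
    return sum(
        1
        for s1, s2 in combinations(partition, 2)
        for a in s1 for b in s2
        if connected(a, b)
    )

def fitness(partition, w_cohesion=1.0, w_coupling=0.5):
    """Higher is better: reward cohesive services, penalize coupling."""
    avg_cohesion = sum(cohesion(s) for s in partition) / len(partition)
    return w_cohesion * avg_cohesion - w_coupling * coupling(partition)

good = [{"create_order", "check_stock", "reserve_item"},
        {"bill_customer", "send_invoice"}]
bad = [{"create_order", "send_invoice"},
       {"check_stock", "bill_customer", "reserve_item"}]
assert fitness(good) > fitness(bad)
```

In a real multi-objective setting the objectives would usually be kept separate (e.g. Pareto ranking) rather than collapsed into one weighted sum; the scalarization here is only for brevity.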
According to Smith (and an untangled interpretation by Charlotte Herzeel), computation can be modeled as a mapping between three distinct domains: a "syntactic" domain, an "internal representational" domain, and the "real world". The syntactic domain consists of descriptions of a system, and the internal domain consists of all the entities that realize the language constructs by which we describe the system. The real world consists of the natural objects and abstract concepts that stakeholders of the system want to refer to in systems, but the interpretation of these concepts is obviously out of reach of a computer. Mapping a description to its internal representation is called "internalization" (cf. "parsing"). A mapping from one element of the internal representation to another is called "normalization"; in normalization, an element is reduced to its simplest form. The mapping from an internal representation to its meaning in the world is called "denotation".
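A toy illustration of the first two mappings (my own, not Smith's formalism), using arithmetic expressions as the "system description": internalization is parsing into an abstract syntax tree, and normalization is constant folding that reduces a tree to its simplest form.

```python
import ast

# Internalization (cf. parsing): map the syntactic domain -- the string
# "2 * (3 + 4)" -- into an internal representation, here Python's AST.
tree = ast.parse("2 * (3 + 4)", mode="eval")

# Normalization: map one internal element to a simpler one.
# Constant folding reduces the expression tree to a single constant node.
def normalize(node):
    if isinstance(node, ast.Expression):
        return normalize(node.body)
    if isinstance(node, ast.BinOp):
        left = normalize(node.left)
        right = normalize(node.right)
        if isinstance(left, ast.Constant) and isinstance(right, ast.Constant):
            ops = {ast.Add: lambda a, b: a + b, ast.Mult: lambda a, b: a * b}
            return ast.Constant(ops[type(node.op)](left.value, right.value))
    return node

reduced = normalize(tree)
assert isinstance(reduced, ast.Constant) and reduced.value == 14

# Denotation -- relating the constant 14 to whatever real-world quantity
# it names -- stays outside the program: the computer only manipulates
# the representation, never the meaning.
```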
Let’s get back to reflection. Generally, reflection is about inspecting (observing and therefore reasoning about) and changing the internal representation (structural reflection) and the normalization (behavioral reflection) of a system. To be more specific, inspecting the internal representation is typically called introspection, changing the internal representation or the normalization is known as intercession, and the mechanism that enables these manipulations is called reification.
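These three notions can be demonstrated in a few lines of Python, whose object model reifies classes and methods as ordinary objects; the class and method names below are illustrative assumptions of mine.

```python
# Minimal sketch of structural reflection in Python.
class Service:
    def invoke(self):
        return "v1"

s = Service()

# Introspection: inspecting the internal representation of the program.
assert type(s).__name__ == "Service"
assert "invoke" in dir(s)

# Reification: the interpreter's internal structures (classes, methods)
# are exposed as ordinary objects that we can pass around and examine.
method = type(s).__dict__["invoke"]
assert callable(method)

# Intercession: changing the internal representation -- we replace the
# method on the class, and the already-running instance picks it up.
Service.invoke = lambda self: "v2"
assert s.invoke() == "v2"
```

The causal connection is the key point: because `s.invoke()` is looked up through the live class object, mutating the representation immediately changes the system's behavior.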
Computational reflection as a concept originated with Brian Cantwell Smith and was elaborated by Pattie Maes (in “Concepts and Experiments in Computational Reflection“) and by Gregor Kiczales and his colleagues (in “The Art of the Metaobject Protocol“) in the context of programming languages. Afterward, with the emergence of new paradigms and the need for distributed and transparent systems embracing component-based concepts, computational reflection found its way all the way down to distributed systems and their underlying middleware infrastructures, to make them highly adaptive. As a result of these efforts, a number of reflective component model specifications (Fractal) and middleware implementations (Julia) emerged. Finally, as the need for dynamic software evolution in self-* systems arose, so did the need to explicitly maintain an architectural description that is causally connected to the run-time model. These concepts have been extensively explored by Walter Cazzola, Peyman Oreizy, Nenad Medvidovic, Richard Taylor, Jeff Magee, Jeff Kramer, and their colleagues under the theme of “Architecture-Based Runtime Software Evolution”.
I found this paper (by Jeannette Wing, CMU) to be a good introduction for anyone interested in quickly comprehending what is, and what is not, “computational thinking“. As software engineers, we have all experienced people thinking of us as geeks whose only skill is computer programming, and there are still many who think the fundamental research in our discipline is done and that only the engineering remains. But this is so untrue. I recommend they read this masterpiece and its accompanying presentation to realize that thinking like a computer scientist means more than being able to program a computer: it requires thinking at multiple levels of abstraction! Wow! By the way, I extracted this interesting quote from the paper and put it here to convince you to read it: “Computational thinking is using abstraction and decomposition when attacking a large complex task or designing a large complex system. It is separation of concerns. It is choosing an appropriate representation for a problem or modeling the relevant aspects of a problem to make it tractable. It is using invariants to describe a system’s behavior succinctly and declaratively. It is having the confidence we can safely use, modify, and influence a large complex system without understanding its every detail.”