Saturday, May 24, 2014

MV* with Lazarus: between Presenter and ViewModel

The MVC conundrum


Sooner or later a programmer will run into the acronym MVC (Model-View-Controller). Despite its ubiquitous presence in discussions and articles about code design, there are few comprehensive examples of using this pattern with Delphi / Lazarus. Most of the examples are just "one form" applications that do not show how to organize a large scale application. There's not even a common pattern among them: some keep a reference to the controller in the view while others do the opposite.

This is not an issue exclusive to Object Pascal. Other languages have the same problem, and the reason is simple: MVC as designed for Smalltalk decades ago does not fit naturally into modern, event-driven GUI architectures. MVP (Model-View-Presenter) and its variations Passive View and Supervising Controller update the pattern to match the requirements of today's user interfaces. There's also Presentation Model and its most famous derivative, MVVM (Model-View-ViewModel).

Meeting the presentation layer


All in all, the objective of all these patterns is to separate the presentation from the business layer, thus facilitating code maintenance. The difference lies in the responsibility of each presentation layer component and how they interact with the model (business object). In order to improve the architecture of my Lazarus projects, I found MVP to be the most feasible to use with Object Pascal. There are good examples of MVP/Passive View with Delphi that could be easily adapted but, in my opinion, it is overkill and counterproductive to define read and write properties for each GUI element.

I have forms as simple as the one shown below:

procedure TAppConfigViewForm.FormShow(Sender: TObject);
begin
  BaseURLEdit.Text := Config.BaseURL;
end;

procedure TAppConfigViewForm.SaveButtonClick(Sender: TObject);
begin
  Config.BaseURL := BaseURLEdit.Text;
  Config.Save;
end;

Having to define view and presenter interfaces and implement a presenter for such a simple view is a no-no to me. On the other hand, in complex views, handling the GUI logic in a separate component is worth the work.

With this in mind, I defined an interface (IPresentation) to abstract how a view (TForm) is configured and shown. To use it, just reference a presentation by a string id, call SetProperties to set published properties and ShowModal to show it.

  
var
  Presentation: IPresentation;

Presentation := PresentationManager['myview'];
Presentation.SetProperties(['ConfigProp', FConfig]).ShowModal;
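
Just to make the idea concrete, here is a rough sketch of what IPresentation could look like, inferred from the usage above; this is an assumption for illustration, not the library's actual declaration:

  IPresentation = interface
    // returns the presentation itself so the calls can be chained
    function SetProperties(const Properties: array of const): IPresentation;
    function ShowModal: TModalResult;
  end;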

The presentations are registered with a specialized IoC container through two overloaded methods:

  
IPresentationManager = interface
  procedure Register(const PresentationName: String; ViewClass: TFormClass); overload;
  procedure Register(const PresentationName: String; PresenterClass: TPresenterClass); overload;
end;

Both have a PresentationName argument that identifies the presentation. The first overload accepts a TFormClass; the view is instantiated directly and there's no presenter. The second accepts a TPresenterClass that will be responsible for showing the view.
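
For illustration, registering the two kinds of presentation could look like the snippet below (the class names are taken from the examples in this post; the exact call sites are an assumption):

  // view only: the form is instantiated directly
  PresentationManager.Register('appconfig', TAppConfigViewForm);
  // presenter driven: the presenter is responsible for showing its view
  PresentationManager.Register('nutritionevaluation', TNutritionEvaluationPresenter);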

This is how the presenter and view classes look:

//presenter 
interface

  TNutritionEvaluationPresenter = class(TBasePresenter)
  public
    function ShowModal: TModalResult; override;
    function CanImportPreviousEvaluation: Boolean;
    procedure ImportPreviousEvaluation;
    procedure SaveEvaluation;
    property EvaluationData: TJSONObject read GetEvaluationData;
  end;

implementation

uses
  NutritionEvaluationView;


function TNutritionEvaluationPresenter.ShowModal: TModalResult;
var
  View: TNutritionEvaluationViewForm;
begin
  View := TNutritionEvaluationViewForm.Create(nil);
  try
    View.Presenter := Self;
    Result := View.ShowModal;
  finally
    View.Free;
  end;
end;

//view
interface

uses
  NutritionEvaluationPresenter;


  TNutritionEvaluationViewForm = class(TForm)
  [..]
  published
    property Presenter: TNutritionEvaluationPresenter read FPresenter write SetPresenter;
  end;

procedure TNutritionEvaluationViewForm.ImportPreviousLabelClick(Sender: TObject);
begin
  FPresenter.ImportPreviousEvaluation;
end;

procedure TNutritionEvaluationViewForm.SaveButtonClick(Sender: TObject);
begin
  FPresenter.SaveEvaluation;
end;

procedure TNutritionEvaluationViewForm.FormShow(Sender: TObject);
begin
  ImportPreviousLabel.Visible := FPresenter.CanImportPreviousEvaluation;
  //update GUI with evaluation data
end;


The Presenter here is acting more like a ViewModel (exposing data, state and operations to the view) than a true presenter. It works fine, but with serious caveats:
  • The view and the presenter know each other, which defeats the purpose of independent implementations. Also, it is not possible to hold a view reference in the presenter interface (circular unit reference)
  • The TForm Presenter property must be set manually (easy to forget)
  • Registering a TForm class that expects a presenter directly will crash, since there'll be no presenter

Interfaces and conventions to the rescue


I was not really satisfied with the above approach, so I reworked the code and came up with the following design:

  • The presentation register method now has three arguments: name, view class and presenter class (optional). When the presenter class is not defined, the view is instantiated directly (see the sketch after this list)
  • The view (TForm) is shown by the internal code; there's no need for the presenter to do it
  • If a presenter class is specified, the view class must define a published property named Presenter. An error is thrown if the property does not exist or is of an incompatible type
  • The Presenter property can also be declared as an interface, allowing the presenter to be completely decoupled from the view implementation
  • It would also be possible to bind the view instance to a presenter property; this is not implemented since, until now, I have not needed it
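
Here is a minimal sketch of how the reworked registration and the Presenter convention fit together, assuming an optional presenter class argument (the signatures and the INutritionEvaluationPresenter interface are assumptions for illustration, not the final library code):

IPresentationManager = interface
  // PresenterClass is optional; when nil the view is instantiated directly
  procedure Register(const PresentationName: String; ViewClass: TFormClass;
    PresenterClass: TPresenterClass = nil);
end;

// registration
PresentationManager.Register('appconfig', TAppConfigViewForm);
PresentationManager.Register('nutritionevaluation', TNutritionEvaluationViewForm,
  TNutritionEvaluationPresenter);

// by convention the view declares a published Presenter property; it can be
// typed as an interface to fully decouple the view from the presenter class
TNutritionEvaluationViewForm = class(TForm)
  [..]
published
  property Presenter: INutritionEvaluationPresenter read FPresenter write SetPresenter;
end;
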
So much talk. The current code can be found here and an example of how I use it here.

Wednesday, February 19, 2014

Thoughts about application architecture with Lazarus

The Delphi books of my days (or why I'm not guilty of my application's poor design)

As most Lazarus developers, I started coding in Delphi (in fact, I learned computer programming with Turbo Pascal) and, to get the most out of the tool, I read some books: I bought three or four and read parts of others in bookstores. This is supposed to be a good practice when learning a new technology.

The problem, noticed by me only years later, is the lack of teaching of good application design, like separation of concerns (view, business, persistence layers), and how to achieve it with Delphi. Most of the books focused on the visual aspect (how to create a good looking form, reports etc.) and on how to set up datasets and the DB-aware controls. The closest thing to good practice advice was putting datasets and datasources in data modules instead of forms.

We can't even blame the book authors. Delphi's greatest selling point was (is?) its Rapid Application Development (RAD) features.

Recipes for a bulky spaghetti

In the early days, when developing my applications, I was a diligent student: I put database logic in data modules and designed the forms as specified in the books. But, as every developer who has created an application with more than three forms knows, things started to get hard to evolve and maintain.

Keeping the database components in data modules did not help much. You end up with shared dataset state, and all the problems that come with it, across different parts of the application.

Below is a snapshot of a data module from my first big application (still in production, by the way).

It could have been even worse if I had not started to use a TDataset factory in the middle of development.
In the end, the project has code like:

  // a form to select a profile
  DataCenter.PrescriptionProfilesDataset.Open;
  with TLoadPrescriptionProfileForm.Create(AOwner) do
  try
    Result := ShowModal;
    if Result = mrYes then
      DataCenter.LoadPrescriptionProfile;
  finally
    DataCenter.PrescriptionProfilesDataset.Close;
    Destroy;
  end;
  
  //snippet of DataCenter.LoadPrescriptionProfile (copy the selected profile to PrescriptionItemsDataset)
  with PrescriptionItemsDataset do
  begin
    DisableControls;
    try
      FilterPrescriptionProfileItems(PrescriptionProfilesDataset.FieldByName('Id').AsInteger);
      while not PrescriptionProfileItemsDataset.Eof do
      begin
        NewMedication := PrescriptionProfileItemsDataset.FieldByName('Medication').AsString;
        if Lookup('Medication', NewMedication, 'Id') = Null then
        begin
          Append;
          FieldByName('PatientId').AsInteger := PatientsDatasetId.AsInteger;
          FieldByName('Medication').AsString := NewMedication;
          FieldByName('Dosage').AsString := PrescriptionProfileItemsDataset.FieldByName('Dosage').AsString;
          [..]
          Post;
        end;
        PrescriptionProfileItemsDataset.Next;
      end;
      ApplyUpdates;
      PrescriptionProfileItemsDataset.Close;
    finally
      EnableControls;
    end;
  end;

It's not necessary to be a software architecture guru to know that this is unmanageable.

Eating the pasta with business objects and inversion of control

In the projects that succeeded the first one, most of the data related code is encapsulated in business objects. The data module does not contain TDataset instances anymore; it is responsible only for acting as a TDataset factory and for implementing some specific data actions. To work with a dataset, it's necessary just to request one by a key, which leads to code like the below:

  FWeightHistoryDataset := DataModule.GetQuery(Self, 'weighthistory');
  FWeightHistoryDataset.ParamByName('prontuaryid').AsInteger := FId;
  FWeightHistoryDataset.Open;

This fixes the shared state issue, since each dataset has a clear, limited scope. But it does not solve the business objects' dependency on a global instance (DataModule), which makes testing harder.

In the project that I'm starting, I solved the dependency on the global instance by using the service locator pattern through the IoC container I cited in a previous post. I defined a resource factory service that is resolved as soon as the business object is created, opening the door to setting up testing environments in a clean manner.
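
Below is a rough sketch of what this looks like inside a business object; the TPatient class, the IResourceFactory name and the Resolve call are assumptions used for illustration, not the actual project code:

constructor TPatient.Create;
begin
  inherited Create;
  // the factory is resolved from the container instead of referencing
  // a global DataModule instance
  FResourceFactory := Container.Resolve(IResourceFactory) as IResourceFactory;
end;

procedure TPatient.LoadWeightHistory;
begin
  FWeightHistoryDataset := FResourceFactory.GetQuery(Self, 'weighthistory');
  FWeightHistoryDataset.ParamByName('prontuaryid').AsInteger := FId;
  FWeightHistoryDataset.Open;
end;

// in a test setup, a fake factory can be registered in place of the real one:
// Container.Register(IResourceFactory, TFakeResourceFactory);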

All done?

Not yet. The business logic is contained in specific classes, there's no shared state across the application and no hardcoded global dependency, but the view layer (forms) is still (dis)organized in the classic way, with each TForm calling and being called by other ones directly. This problem, and the solutions I'm working on, will be the subject of a future post.

Saturday, February 15, 2014

Number of units, optimization and executable size

I tend to split my code into small units instead of writing a big unit with lots of classes or functions. The drawback is the large number of files but, in my opinion, the benefits outweigh it.


One thing that always bothered me was whether this practice has any effect on file size.


It seems not. I wrote two versions of the same program. In the first, all classes (one descendant of another) are defined in the same unit, while in the second each class lives in a separate unit. When compiled with debugging info, the multi-unit version is a little bigger. This difference in size does not exist when compiled without debugging info.


As a side note, I noticed that compiling with the -O2 flag leads to smaller executables compared with -O1, the Lazarus default. It's just a few kilobytes, but worth noting.

Sunday, February 02, 2014

Using TComponent with automatic reference count

For some time, I have known the concepts of Inversion of Control (IoC) and Dependency Injection (DI), as well as the benefits they bring, but I have never used them in my code. Now that I'm starting a new project from scratch, and the deadline is not so tight, I decided to raise the bar for my code design.

I'll implement an IoC container along the lines of VSoft's one. While adding the possibility of doing DI through constructor injection would be great, I won't implement it. It's not a hard requirement of mine, and FPC currently does not support the features (basically Delphi's new RTTI) needed to implement it without hacks.

Automatic reference counting


Most Delphi IoC implementations use COM interfaces and rely on automatic reference counting to manage object instance life cycles. So do I. This approach's drawback is that the class to be instantiated must handle the reference count. When designing new classes, or when the class hierarchy can be modified, it is sufficient to inherit from the TInterfaced* classes. The problem arises when it is necessary to use a class that has a fixed hierarchy and does not handle reference counting, like the LCL ones.

Since I plan to decouple TForm descendants, I need a way to use them with the IoC container. Below is the (rough) design, in pseudo code:

//Define interface
  IPersonView = interface
  ['{9B5BBA42-E82B-4CA0-A43D-66A22DCC10DE}']
    procedure DoIt;
  end;

  //Implement an IPersonView
  TPersonViewForm = class(TForm, IPersonView)   
    procedure DoIt;
  end;

  //Register implementation   
  Container.Register(IPersonView, TPersonViewForm); 

  //Instantiate the view
  Container.Resolve(IPersonView)

At first glance, it should work seamlessly. And in fact it does: a TPersonViewForm is instantiated and returned as IPersonView. The only issue is that the object instance will never be freed, even when the interface reference goes out of scope. This happens because the _AddRef and _Release methods of TComponent do not handle reference counting by default.

VCLComObject to the rescue


Examining the code, we observe that TComponent's _AddRef and _Release forward to the VCLComObject property. There's no good documentation or examples of using this property, so I wrote an example to see whether it would solve my problem.
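
From memory, the relevant code looks roughly like this (paraphrased, not a verbatim copy; check the Classes unit for the exact implementation):

function TComponent._AddRef: Integer;
begin
  if Assigned(VCLComObject) then
    // delegate the reference counting to the IVCLComObject implementation
    Result := IVCLComObject(VCLComObject)._AddRef
  else
    // -1 means no reference counting takes place
    Result := -1;
end;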

Basically, I wrote TComponentReference, a descendant of TInterfacedObject with a dummy implementation of IVCLComObject that gets a TComponent reference in the constructor and frees it in BeforeDestruction.

constructor TComponentReference.Create(Component: TComponent);
begin
  // keep a reference to the component whose lifetime will be managed
  FComponent := Component;
end;

procedure TComponentReference.BeforeDestruction;
begin
  inherited BeforeDestruction;
  // when the reference count reaches zero, destroy the wrapped component
  FComponent.Free;
end;


And this is how I tested it:

function GetMyIntf: IMyIntf;
var
  C: TMyComponent;
  R: IVCLComObject;
begin
  C := TMyComponent.Create(nil);
  // attach the reference counter so C is freed when the last reference goes away
  R := TComponentReference.Create(C);
  C.VCLComObject := R;
  Result := C as IMyIntf;
end;

var
  MyIntf: IMyIntf;
begin
  MyIntf := GetMyIntf;
  MyIntf.DoIt;
end.

It worked! I get an IMyIntf reference and no memory leaks. Easier than I initially thought.

The code can be downloaded here.

Sunday, July 08, 2012

The cost of suppressing a warning (and how not to pay for it)

In the previous post, I pointed out that passing a managed type (dynamic array) as a var parameter is more efficient than returning the value as a function result. However, this technique has a known side effect: the compiler outputs a message (Warning: Local variable "XXX" does not seem to be initialized) each time a call to the procedure is compiled.

The direct way to suppress the warning is to change the parameter from var to out. Pretty simple, but out does more than inhibit the compiler message. It implicitly initializes managed-type parameters to nil, or adds a call to FPC_INITIALIZE if the parameter is a record that has at least one field of a managed type. It does not add implicit code for simple types like Integer or class instances (TObject etc.).
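
Using the BuildRecArray example from the previous post, the out version would simply be the following sketch; the body is unchanged, only the parameter modifier differs:

procedure BuildRecArray(out Result: TMyRecArray);
begin
  // the compiler adds implicit code to initialize Result, which is redundant
  // here since the routine initializes it explicitly anyway
  SetLength(Result, 1);
  Result[0].O := nil;
  Result[0].S := 'x';
end;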

Although the performance impact is mostly negligible, it is extra code anyway. In my case I initialize the parameter explicitly, so out would add redundant code. There's an alternative to suppress the message: add the directive {%H-} in front of the variable that is being passed to the procedure. In the example of the previous post it would be:

BuildRecArray({%H-}Result);

It can be annoying if the function is called often or the routine is part of a public API; otherwise it is fine. At least for me.

Update: out does not generate initialization code for records that contain only fields whose types are not automatically managed by the compiler, e.g., Integer.

Saturday, July 07, 2012

Does it matter how dynamic arrays are passed/returned to/from a routine?

I was implementing a routine that should return a dynamic array and wondered whether the code produced for a function and for a procedure with a var parameter is different. So I set up a simple test:

type
  TMyRec = record
    O: TObject;
    S: String;
  end;

  TMyRecArray = array of TMyRec;

function BuildRecArray: TMyRecArray;
begin
  SetLength(Result, 1);
  Result[0].O := nil;
  Result[0].S := 'x';
end;

procedure BuildRecArray(var Result: TMyRecArray);
begin
  SetLength(Result, 1);
  Result[0].O := nil;
  Result[0].S := 'x';
end;

var
  Result: TMyRecArray;

begin
  BuildRecArray(Result); //or Result := BuildRecArray
end.


Looking at the generated assembly revealed that the function version (which returns the array as the result) leads to bigger code when compared with the procedure version (which takes the array as a var parameter). More: the code difference is due to an implicit exception frame, which is known to impact performance.

And what about the caller code? Again, the function version generates more code (it creates a temporary variable and calls FPC_DYNARRAY_DECR_REF).

In short: yes, it matters.

Thursday, June 14, 2012

The cost of using generics

For a few versions now, FPC has provided support for generics. They allow the developer to save some typing and also improve type safety by preventing unsafe typecasts.

Unfortunately, these benefits are not free. Every time a generic is specialized, the whole implementation code is copied into the unit/program.

To be clearer, I created two examples that implement a list of a custom class (TMyObj): one uses a TFPList, the other specializes a TFPGList. The difference in usage is that the former needs a typecast.
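
The following is a minimal reconstruction of the two variants, not the original test program; TFPGList comes from the fgl unit:

{$mode objfpc}
uses
  Classes, fgl;

type
  TMyObj = class
  end;

  // specializing TFPGList copies its whole implementation into this program
  TMyObjList = specialize TFPGList<TMyObj>;

var
  PlainList: TFPList;
  GenericList: TMyObjList;
  Obj: TMyObj;
begin
  PlainList := TFPList.Create;
  GenericList := TMyObjList.Create;
  Obj := TMyObj.Create;
  try
    PlainList.Add(Obj);
    GenericList.Add(Obj);
    // the plain TFPList stores untyped pointers, so a typecast is needed
    Obj := TMyObj(PlainList[0]);
    // the specialized list is typed, no cast required
    Obj := GenericList[0];
  finally
    Obj.Free;
    GenericList.Free;
    PlainList.Free;
  end;
end.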

I compiled both with FPC 2.6.0 under Windows. The result is a difference in executable size of 2 KB, the generic version being bigger. Then I looked at the generated asm: the code that uses the list classes is the same; the difference comes from the copied implementation of TFPGList.

Many will say that code size is not an issue anymore given the availability of big hard drives, but I still think it is good practice to seek smaller code. Regarding generics, they should be used, IMHO, when the benefits are clear, like for classes that are instantiated many times in user (programmer) code, and avoided in the internal structures of, e.g., the RTL or third-party libraries.