
Sony RCT iPad

Overview

This application is aimed not at general consumers but at media professionals who cover events with Sony cameras, letting them control the camera remotely from the app. For example, visit this link to see an event covered with it. There are three versions of the application: Windows, macOS, and iPad. My focus was the iPad version, where my main module was a parser that displays frames on the iPad interface based on information obtained from the SDK. The frames also had to move in response to the connected camera and mirror the user's actions on the iPad screen, so the functionality needed to work seamlessly on both ends.
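To give a rough sense of what the parser produced (the real SDK types are under NDA, so the names below are purely hypothetical), each frame reported by the SDK was reduced to a simple model that the SwiftUI layer could draw and track:

```swift
import CoreGraphics

// Hypothetical sketch only; these are not the Sony SDK's actual types.
enum FrameKind {
    case focus, face, tracking
}

struct CameraFrame: Identifiable {
    let id: Int
    let kind: FrameKind
    let rect: CGRect        // position and size in the camera's coordinate space
    let isSelected: Bool
}
```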


Due to NDA constraints, I can't disclose further details about the project, share screenshots, or use the app icon. However, feel free to visit their official link to take a closer look at the project.

Role and Responsibilities

I followed a blueprint to implement this feature, drawing on a similar implementation already done in the macOS version. While the Mac app was developed in Objective-C, the iPad version was built with SwiftUI. The SDK, developed in C++, was common across all platforms. For Mac and Windows, a communication layer in C++ facilitated interaction between the front end and the SDK; for the iPad, an additional bridge layer in Objective-C++ was created to communicate with the SDK, since the iPad application was written in SwiftUI.
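A minimal sketch of how that layering looks from the Swift side, assuming a hypothetical `CameraSDKBridging` interface standing in for the real Objective-C++ bridge (whose actual class and API I can't show):

```swift
import Foundation
import Combine

// Placeholder for the Objective-C++ bridge exposed to Swift; the name and
// methods are assumptions for illustration only.
protocol CameraSDKBridging: AnyObject {
    func connect(completion: @escaping (Bool) -> Void)
    func onFrameUpdate(_ handler: @escaping (Data) -> Void)
}

final class CameraSession: ObservableObject {
    @Published private(set) var latestFramePayload: Data?

    private let bridge: CameraSDKBridging

    init(bridge: CameraSDKBridging) {
        self.bridge = bridge
        // Raw frame data from the C++ SDK arrives through the bridge and is
        // published so SwiftUI views can react to it.
        bridge.onFrameUpdate { [weak self] payload in
            DispatchQueue.main.async { self?.latestFramePayload = payload }
        }
    }
}
```

Publishing the raw payload through an `ObservableObject` keeps the SwiftUI views reactive to whatever the SDK reports, without the views knowing anything about the C++ layer.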

 

In addition to this, I contributed to the app's overall tab-bar navigation, the user interface of the Main Camera Control panel view, and the iCloud upload of saved device files.
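The navigation itself followed the standard SwiftUI tab-bar pattern; the sketch below uses placeholder tab names rather than the app's actual sections:

```swift
import SwiftUI

// Minimal tab-bar sketch; tab titles and icons are placeholders.
struct RootTabView: View {
    var body: some View {
        TabView {
            Text("Camera Control")
                .tabItem { Label("Control", systemImage: "camera") }
            Text("Saved Files")
                .tabItem { Label("Files", systemImage: "folder") }
        }
    }
}
```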

Project Duration

From March 2023 to September 2023.

Challenges and Solutions

Lack of Documentation

Challenge: The absence of comprehensive documentation made understanding the blueprint complex, requiring extensive trial and error.

Solution: Utilized the macOS implementation as a reference, extracting insights and sample data to frame the initial structure. Conducted thorough testing to comprehend the functionality and nuances, gradually building a solid foundation for the iPad version.


Complex Frame Handling

Challenge: Handling various frame types, each dynamic and specific to different camera options, presented complexities in frame drawing and adjustment.

Solution: Drew frames on the view based on the Mac implementation and built a flexible architecture for frame handling, ensuring adaptability to diverse camera options. Frames could be changed and redrawn dynamically, enhancing the versatility of the feature.
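Conceptually, the overlay simply redraws whenever the frame models change. A simplified sketch, assuming the frames have already been converted to view coordinates:

```swift
import SwiftUI

// Illustrative overlay: redraws its rectangles whenever `frames` changes.
struct FrameOverlay: View {
    let frames: [CGRect]    // already converted to view coordinates

    var body: some View {
        ZStack(alignment: .topLeading) {
            ForEach(frames.indices, id: \.self) { index in
                let rect = frames[index]
                Rectangle()
                    .stroke(Color.green, lineWidth: 2)
                    .frame(width: rect.width, height: rect.height)
                    .offset(x: rect.minX, y: rect.minY)
            }
        }
    }
}
```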


Complicated Parser Adaptation

Challenge: The parser developed for Mac, while effective, was overly complicated and required significant adaptation to suit the SwiftUI structure of the iPad application.

Solution: Streamlined and simplified the parser architecture for compatibility with SwiftUI. Identified and addressed unnecessary checks, optimizing the parser for the iPad environment, ensuring efficient data parsing and extraction.
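Because the actual payload format is covered by the NDA, here is a deliberately simplified stand-in that captures the shape of the task: turning a flat stream of values into frame rectangles.

```swift
import CoreGraphics

// Simplified, hypothetical parser: assumes each frame arrives as four values
// (x, y, width, height) in the camera's coordinate space.
func parseFrames(from payload: [Double]) -> [CGRect] {
    let recordLength = 4
    return stride(from: 0, to: payload.count - recordLength + 1, by: recordLength).map { i in
        CGRect(x: payload[i], y: payload[i + 1],
               width: payload[i + 2], height: payload[i + 3])
    }
}

// Example: two frames encoded back to back.
let frames = parseFrames(from: [10, 20, 100, 80, 200, 150, 60, 40])
```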

 

Screen Size Conversion (Across Different Cameras and iPads)

Challenge: Ensuring consistent frame positioning and behavior across different iPad screen sizes, despite variations in camera screen sizes, required careful consideration.

Solution: Successfully implemented a conversion mechanism to adapt camera screen size to iPad screen size. This ensured uniform frame placement and responsiveness across different devices, enhancing the user experience.
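At its core, the conversion is a proportional mapping from the camera's coordinate space to the iPad view's coordinate space; a minimal sketch with assumed sizes:

```swift
import CoreGraphics

// Scales a frame reported in the camera's coordinate space into the view's
// coordinate space, so it lands in the same relative position on any screen.
func convert(_ rect: CGRect, fromCameraSize camera: CGSize, toViewSize view: CGSize) -> CGRect {
    let scaleX = view.width / camera.width
    let scaleY = view.height / camera.height
    return CGRect(x: rect.origin.x * scaleX,
                  y: rect.origin.y * scaleY,
                  width: rect.width * scaleX,
                  height: rect.height * scaleY)
}

// Example: a region from a 640×480 camera space mapped onto a 1024×768 view.
let mapped = convert(CGRect(x: 100, y: 50, width: 200, height: 150),
                     fromCameraSize: CGSize(width: 640, height: 480),
                     toViewSize: CGSize(width: 1024, height: 768))
```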

Technologies

Frontend (iOS App)

Languages: Swift, Objective-C++ (bridge to the C++ SDK)

Frameworks: SwiftUI

Design Patterns: MVVM (Model-View-ViewModel)

Others

Version Control: GitHub

Code Review: Confluence

Project Management: JIRA
