"Hi there -- I believe that title isn't quite accurate; Apple specifically is referring to the behavior of a library called Rollout, which lets people dynamically inject Objective-C/Swift. They are doing hot delivery of native Objective-C code. It's really not about React Native or Expo.
Expo (and the React Native library we use) doesn't do any of that. We also make sure we don't expose ways to dynamically execute native code such as the dlopen() function that Apple mentioned in that message. We also haven't received any messages from Apple about Expo, nor have we heard of any Expo developers receiving the same." - Exponent Team
Hi, I work on Expo (YC S16) and also am a core contributor to React Native.
Apple's message reads to me like they're concerned about libraries like Rollout and JSPatch, which expose uncontrolled, direct access to native APIs (including private APIs) or enable dynamic loading of native code. Rollout and JSPatch are the only two libraries I've heard of being correlated with the warning.
React Native is different from those libraries because it doesn't expose uncontrolled access to native APIs at runtime. Instead, the developer writes native modules that define some functions the app can call from JavaScript, like setting a timer or playing a sound. This is the same strategy that "hybrid" apps that use a UIWebView/WKWebView have been using for many years. From a technical perspective, React Native is basically a hybrid app except that it calls into more UI APIs.
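To make that contrast concrete, here is a toy sketch (in JavaScript, with made-up names; this is not React Native's or any bridge's actual implementation) of how a hybrid-app bridge constrains what JS can reach: the native side registers a fixed table of methods, and anything outside that table is unreachable.

```javascript
// Hypothetical sketch of a hybrid-app bridge (names are illustrative).
// The native side registers a fixed set of modules/methods at build time;
// JavaScript can only call what was registered.
const registeredModules = {
  Timer: { setTimer: (ms) => `timer set for ${ms}ms` },
  Audio: { playSound: (name) => `playing ${name}` },
};

function callNative(moduleName, methodName, ...args) {
  const module = registeredModules[moduleName];
  const method = module && module[methodName];
  if (typeof method !== "function") {
    // There is no generic "call any selector" escape hatch here:
    // only statically registered methods are reachable from JS.
    throw new Error(`${moduleName}.${methodName} is not a bridged method`);
  }
  return method(...args);
}
```

So `callNative("Timer", "setTimer", 500)` works, while something like `callNative("Runtime", "dlopen")` fails, mirroring the point that the native surface reachable from JS is fixed when the app is built and reviewed.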
Technically it is possible for a WebView app or a React Native app to contain code that exposes uncontrolled access to native APIs. This could happen unintentionally; someone using React Native might also use Rollout. But this isn't something specific to or systemic about React Native or WebViews.
One nice thing about Expo, which uses React Native, is that we don't expose uncontrolled or dynamic access to native APIs and take care of this issue for you if your project is written only in JS. We do a lot of React Native work and are really involved in the community and haven't heard of anyone using Expo or React Native alone having this issue.
Apple has been fine with WebViews in apps since the beginning of the App Store, including WebViews that make calls to native code, like in the Quip app. WKWebView even has APIs for native and web code to communicate. It's OK for a WebView to call out to your native code that saves data to disk, registers for push notifications, plays a sound, and so on.
React Native is very much like a WebView except it calls out to one more native API (UIKit) for views and animations instead of using HTML.
What neither WebViews nor React Native does is expose the ability to dynamically call arbitrary native methods at runtime. You write regular Objective-C methods that are statically analyzed by Xcode and Apple, and it's no easier to call unauthorized, private APIs than it is in any other native app.
With Expo all of the native modules are safe to use and don't expose arbitrary access to native APIs. Apps made with Expo are set up to be good citizens in the React Native and Apple ecosystems.
Well, let's be clear here. I use React Native, and it would take me about 30 minutes to write a bridged React Native method that could execute Objective-C code dynamically, including accessing all the private APIs you could want.
Yeah. Objective-C is definitely really flexible. From my reading of the warning, the important thing is not to execute Objective-C dynamically or access private APIs regardless of whether you're using React Native, Cordova, a WebView, or even a bare iOS app.
Simple: JIT compilers are banned, and that excludes any modern browser's JavaScript implementation from iOS. But anyone using Apple's JavaScriptCore has nothing to fear.
The rules explicitly forbid any HTML renderer or JS interpreter aside from WebKit, JIT or no JIT. I believe all the popular third-party browsers today still use the non-JIT UIWebView rather than WKWebView, because the former gives you more control over the request cycle.
WKWebView has a JIT. It executes the JS in a process separate from the host app; that's where the JIT lives, and that special process is whitelisted to do JIT magic.
I understand that; the OP was stating that none of the "browser skins" had JIT because they all used UIWebView, which isn't the case with the link I posted :p.
Apple could potentially close any loopholes here by scanning new apps for their API usage, checking for any 'bad' calls, and then writing the remaining discovered calls into a permissions file that is delivered with the app in the store.
At runtime, any API calls made by the app are checked against this file; if a new API call is found, then it must have escaped Apple's code scanning logic. The API call can be rejected and logged for Apple to improve their scanner.
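A toy sketch of that idea (all names hypothetical, in JavaScript for illustration): the scanner's output is modeled as a set of approved API names shipped with the app, and every call is checked against it at runtime, with escapees logged for the scanner to learn from.

```javascript
// Hypothetical runtime check against a reviewed-API manifest.
// "approvedCalls" stands in for the permissions file the scanner would
// ship with the app; the API names here are made up for illustration.
const approvedCalls = new Set(["UIView.addSubview", "AVAudioPlayer.play"]);
const escapedCalls = []; // calls the static scanner failed to discover

function guardedCall(apiName, fn) {
  if (!approvedCalls.has(apiName)) {
    escapedCalls.push(apiName); // logged so the scanner can be improved
    throw new Error(`${apiName} was not discovered during review`);
  }
  return fn();
}
```

Here `guardedCall("UIView.addSubview", ...)` goes through, while a call whose name isn't in the manifest is rejected and recorded; that record is exactly the feedback loop described above.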
This is a great idea, actually. Isn't Google already doing something like this via SELinux? You give the app a manifest of calls it's allowed to make, and if a call isn't in the manifest it gets rejected?
SELinux is not that strong. It works on kernel syscall boundaries and some parameters thereof, and those aren't particularly fine grained. Service access is governed by a separate Google API, for example.
Moreover, an arbitrary app cannot extend the system's SELinux policy.
There are many legitimate uses of calling methods and functions via reflection. Expecting to catch all of them in a short review process is comically optimistic for anything but the most simplistic of apps.
Your suggestion of enforcing this at runtime also makes no sense from a performance or privacy standpoint.
Obvious (to me) idea: have the private API access stored as data sent from the server at runtime, rather than code in the reviewed app. Basically the equivalent of eval()-ing a string for front-end javascript code.
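In front-end JavaScript terms, the pattern being described is simply this (payload inlined here for illustration; in the scenario above it would be fetched from a server at runtime):

```javascript
// The behavior arrives as data, not as reviewed code: whoever controls
// the string controls what runs. Inlined here; imagine it fetched at runtime.
const serverPayload = "6 * 7";
const result = eval(serverPayload); // → 42; the string decides what executes
```

This is precisely why static review can't see it: the shipped code contains only the `eval` plumbing, not the behavior itself.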
Rather than speculating on what really boils down to semantics once the ToS is involved, maybe someone could actually try submitting an app and report back on whether it triggers the same failure?
This needs a little elaboration. Cached Javascript in any hybrid app is a security hole because that can be exposed through a jailbreak. Depending on how much of your business logic you've pushed into the JS layer to enable that "80% code sharing" that makes managers go all tingly you may be exposing all kinds of things - cached access tokens, API keys and whatnot - to anyone who wants to install your app and mine its secrets.
At that point your phone is fucked, anyhow. If you've lost physical control of the device and an attacker has broken the lock, you're compromised in much bigger ways.
How is this any different from native code? Afaik, you can access the native compiled source code of an app on a jailbroken phone. Sure it's more of a pain to parse through but security through obscurity isn't security.
Definitely, running "strings" on a binary is about as easy as finding the JS for a hybrid app (WebViews or React Native). Depending on your experience it could be easier to extract API keys from an IPA than from JS.
In either case the root issue is about sending secrets like an unscoped API key to the client. "Client secret" is an oxymoron in this context regardless of the programming language.
You'd have to know a few things first, like (1) the IPA is a ZIP file, (2) the ZIP file is actually of a directory and (3) you can dump the actual code in the JS files (if they're in the bundle directory) much easier than you can look for strings from the binary that might look like an API key.
The API key is actually the least of the hazards, since you can hide that in the keychain. Having source code for your business logic shipping in your app is not good; having it be hackable business logic (by changing the JS in place) is very not good.
Shipping source code with business logic (assuming the definition of source code includes obfuscated JS) is how the entire web works today! With WASM, the code that is shipped will be even further removed from the original source, and really not so different from downloading ARM binaries from the App Store or Java/ART bytecode from the Play Store.
Not the entire web. eBay and Amazon don't put all their algorithms in the browser; PayPal doesn't either. They hide their company jewels behind their API where they're a little more secure. What you see in the browser is the presentation layer code.
Hybrid apps could achieve that same kind of relative business logic security, but at the cost of pushing more and more of the actual business logic behind an API and not in the JS in the app. At that point, the benefits of code sharing (such as they are) get fewer and fewer since it's really pretty easy to write API code in Objective C, Swift, Java or Kotlin.
You aren't "parsing" through compiled (and linked) code; you're decompiling it, which is a much trickier thing to do and get right. Having your app logic in Javascript in the sandbox cache is just serving it up on a plate.
They do, and it's for this reason that the really important stuff isn't in the web page at all - it's behind the company's API. The "business logic" that you can safely push out to a browser or hybrid app is somewhat limited, which means there is a hard upper limit on the real code sharing savings you can get with hybrid apps versus full native.
Yes it is, jailbreaking bypasses critical security features of your phone. Granted, it's required to run certain kinds of software but there are better ways to run your own code on your phone (like getting your own developer certificate) that preserve the security model.
The problem with Apple that you and your customers need to be aware of (or concerned about) is that once a number of your customers sidestep Apple's policies w.r.t. pushing or changing features via JavaScript, Apple will change its policies to close that loophole. It's only a matter of time before companies take advantage of this path to sidestep App Store approvals.
As a user, I'm pretty baffled that some developer has deliberately bypassed something that I see as a security measure.
If an app is modified on the fly to use an undocumented, maybe forbidden-by-Apple method in order to bypass security features or, worse, to spy on me, I'm clearly not OK with that.
Do you really think the Apple ecosystem works because users see the App Store as an evil cage and external developers as angels with good intentions?
By not allowing developers to patch their app 'on the fly', directly, without going through a new version in the App Store (which is a very lengthy process, almost forever in security terms), Apple effectively protects iOS users from malicious code (not from the dev, who is probably to be trusted if the app is already installed, but from MITM attacks and the like). However, at the same time they rule out very fast security patches, the absence of which may let a vulnerability compromise the device and all associated data and accounts entirely.
So there's no best world, but undoubtedly many aspects to consider. Anecdotally, I find that it's often better to trust the developers of an app to maintain their own thing, the OS only being a facilitator, provided the user has control (UAC on Windows, permissions on mobile, etc.) [note: obviously you trust the OS vendor to patch said OS; it's just a particular kind of app]
I believe a carefully crafted update process should mostly mitigate that risk.
Also, the current policy doesn't prevent app developers from implementing a "kill switch" that prompts the user to update (or wait for an update) at the splash screen and aborts loading the malfunctioning version.
Though the security justification here is limited to private APIs & native code pushing, the first few sentences of the rejection definitely seem like the "spirit" of the terms includes any significant functionality pushing at all. Wouldn't be surprised if they ramp up enforcement on that.
The key part of JSPatch is that it exposes arbitrary, uncontrolled access to native APIs. You could use it to call private APIs even because Objective-C doesn't distinguish between public and private APIs at runtime, so Xcode's compiler checks and Apple's static analysis can't anticipate which APIs are possibly called.
In contrast, React Native doesn't expose uncontrolled access to native APIs. You write regular Objective-C methods that are callable from JavaScript, and within your Objective-C methods you write regular Objective-C that is statically checked by Xcode and Apple.
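The JSPatch-style hazard can be sketched in JavaScript terms (the names here are illustrative, not JSPatch's real API): with string-based dispatch, the method name can arrive from anywhere at runtime, so a static scan of the shipped code never sees which APIs get called.

```javascript
// Illustrative contrast, not any library's actual API. Pretend the
// underscore-prefixed method stands in for a private native API.
const someNativeObject = {
  publicMethod:   () => "public result",
  _privateMethod: () => "private result",
};

// Uncontrolled, string-based dispatch: anything reachable can be invoked.
function dynamicCall(target, selectorString) {
  return target[selectorString]();
}

// The selector never appears whole in the shipped source; it could just
// as easily be a string downloaded from a server after review.
const selector = ["_private", "Method"].join("");
```

`dynamicCall(someNativeObject, selector)` reaches the "private" method even though no reviewed line of code spells out its name, which is exactly what static analysis can't anticipate.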
The main reason React Native isn't impacted is that Facebook + Instagram + Airbnb + SoundCloud + ... are using it, and Apple can't justify to its user base rejecting those favorite apps for a technical reason.
"You write regular Objective-C methods that are callable from JavaScript" - the Objective-C function that I'm calling from JS can be capable of calling a private API received in an argument - that breaks the entire claim of React Native not supporting calls to private APIs. React Native must also go down.
If you deliberately write code that calls private APIs that are exposed to JS in your React Native app, I expect only your specific app to receive this warning. The same would be true in an app that doesn't use React Native at all, as we've seen with apps using Rollout.
The warning is not about React Native, it's about exposing uncontrolled access to native APIs including private ones and React Native doesn't do that.
Consider the difference between a regular human, who in theory could take a knife and violently rob or kill someone, and a human-like robot that can easily be programmed to do the same. Which one has to be isolated?
It is all about intent. If an app enables that, it is banned for that behavior, not for having React in its kitchen. But if you keep an armed drone there, then the police have questions.